2026-04-05 00:00:07.985487 | Job console starting
2026-04-05 00:00:08.024554 | Updating git repos
2026-04-05 00:00:08.345637 | Cloning repos into workspace
2026-04-05 00:00:08.580397 | Restoring repo states
2026-04-05 00:00:08.596047 | Merging changes
2026-04-05 00:00:08.596067 | Checking out repos
2026-04-05 00:00:09.186946 | Preparing playbooks
2026-04-05 00:00:10.304161 | Running Ansible setup
2026-04-05 00:00:19.477079 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-05 00:00:21.015485 |
2026-04-05 00:00:21.015615 | PLAY [Base pre]
2026-04-05 00:00:21.047810 |
2026-04-05 00:00:21.047932 | TASK [Setup log path fact]
2026-04-05 00:00:21.093266 | orchestrator | ok
2026-04-05 00:00:21.134985 |
2026-04-05 00:00:21.135115 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 00:00:21.192898 | orchestrator | ok
2026-04-05 00:00:21.212875 |
2026-04-05 00:00:21.213560 | TASK [emit-job-header : Print job information]
2026-04-05 00:00:21.308031 | # Job Information
2026-04-05 00:00:21.308175 | Ansible Version: 2.16.14
2026-04-05 00:00:21.308206 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-05 00:00:21.308295 | Pipeline: periodic-midnight
2026-04-05 00:00:21.308757 | Executor: 521e9411259a
2026-04-05 00:00:21.309190 | Triggered by: https://github.com/osism/testbed
2026-04-05 00:00:21.309238 | Event ID: 1928b94beaae403ebd11dd0b50186fab
2026-04-05 00:00:21.325615 |
2026-04-05 00:00:21.325721 | LOOP [emit-job-header : Print node information]
2026-04-05 00:00:21.650726 | orchestrator | ok:
2026-04-05 00:00:21.650929 | orchestrator | # Node Information
2026-04-05 00:00:21.650960 | orchestrator | Inventory Hostname: orchestrator
2026-04-05 00:00:21.651039 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-05 00:00:21.651133 | orchestrator | Username: zuul-testbed04
2026-04-05 00:00:21.651161 | orchestrator | Distro: Debian 12.13
2026-04-05 00:00:21.651187 | orchestrator | Provider: static-testbed
2026-04-05 00:00:21.651210 | orchestrator | Region:
2026-04-05 00:00:21.651232 | orchestrator | Label: testbed-orchestrator
2026-04-05 00:00:21.651253 | orchestrator | Product Name: OpenStack Nova
2026-04-05 00:00:21.651500 | orchestrator | Interface IP: 81.163.193.140
2026-04-05 00:00:21.681703 |
2026-04-05 00:00:21.681819 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-05 00:00:23.533845 | orchestrator -> localhost | changed
2026-04-05 00:00:23.541510 |
2026-04-05 00:00:23.541630 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-05 00:00:26.373929 | orchestrator -> localhost | changed
2026-04-05 00:00:26.395467 |
2026-04-05 00:00:26.395578 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-05 00:00:27.025976 | orchestrator -> localhost | ok
2026-04-05 00:00:27.031673 |
2026-04-05 00:00:27.031764 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-05 00:00:27.079305 | orchestrator | ok
2026-04-05 00:00:27.113360 | orchestrator | included: /var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-05 00:00:27.143336 |
2026-04-05 00:00:27.143430 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-05 00:00:30.713959 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-05 00:00:30.714125 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/work/4d3ed407a6834c7fa69ad074083dc131_id_rsa
2026-04-05 00:00:30.714155 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/work/4d3ed407a6834c7fa69ad074083dc131_id_rsa.pub
2026-04-05 00:00:30.714177 | orchestrator -> localhost | The key fingerprint is:
2026-04-05 00:00:30.714199 | orchestrator -> localhost | SHA256:S8RKZCO0VgXfwXk8Hi8AoKQTpT4Bq3lV9hAoznoarE0 zuul-build-sshkey
2026-04-05 00:00:30.714217 | orchestrator -> localhost | The key's randomart image is:
2026-04-05 00:00:30.714245 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-05 00:00:30.714264 | orchestrator -> localhost | |. .o=.%=oo.o |
2026-04-05 00:00:30.714282 | orchestrator -> localhost | | o.=.X * .+.= |
2026-04-05 00:00:30.714299 | orchestrator -> localhost | |.o=.= . = .+ + |
2026-04-05 00:00:30.714315 | orchestrator -> localhost | |.oo= . o o . |
2026-04-05 00:00:30.714331 | orchestrator -> localhost | |+.+ . S . |
2026-04-05 00:00:30.714354 | orchestrator -> localhost | |ooE. . . |
2026-04-05 00:00:30.714371 | orchestrator -> localhost | |.* . |
2026-04-05 00:00:30.714388 | orchestrator -> localhost | |o . |
2026-04-05 00:00:30.714404 | orchestrator -> localhost | | |
2026-04-05 00:00:30.714421 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-05 00:00:30.714464 | orchestrator -> localhost | ok: Runtime: 0:00:02.213993
2026-04-05 00:00:30.720347 |
2026-04-05 00:00:30.720432 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-05 00:00:30.748161 | orchestrator | ok
2026-04-05 00:00:30.796761 | orchestrator | included: /var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-05 00:00:30.831892 |
2026-04-05 00:00:30.831996 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-05 00:00:30.881165 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:30.888315 |
2026-04-05 00:00:30.888406 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-05 00:00:31.849569 | orchestrator | changed
2026-04-05 00:00:31.854734 |
2026-04-05 00:00:31.864986 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-05 00:00:32.211734 | orchestrator | ok
2026-04-05 00:00:32.220866 |
2026-04-05 00:00:32.220956 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-05 00:00:32.793494 | orchestrator | ok
2026-04-05 00:00:32.798334 |
2026-04-05 00:00:32.798413 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-05 00:00:33.325381 | orchestrator | ok
2026-04-05 00:00:33.332588 |
2026-04-05 00:00:33.332666 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-05 00:00:33.374982 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:33.381246 |
2026-04-05 00:00:33.381335 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-05 00:00:34.610142 | orchestrator -> localhost | changed
2026-04-05 00:00:34.630665 |
2026-04-05 00:00:34.630763 | TASK [add-build-sshkey : Add back temp key]
2026-04-05 00:00:35.849323 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/work/4d3ed407a6834c7fa69ad074083dc131_id_rsa (zuul-build-sshkey)
2026-04-05 00:00:35.849582 | orchestrator -> localhost | ok: Runtime: 0:00:00.023648
2026-04-05 00:00:35.857164 |
2026-04-05 00:00:35.857250 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-05 00:00:36.326598 | orchestrator | ok
2026-04-05 00:00:36.335471 |
2026-04-05 00:00:36.335585 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-05 00:00:36.372719 | orchestrator | skipping: Conditional result was False
2026-04-05 00:00:36.445550 |
2026-04-05 00:00:36.445646 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-05 00:00:36.987751 | orchestrator | ok
2026-04-05 00:00:36.998626 |
2026-04-05 00:00:36.998725 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-05 00:00:37.026467 | orchestrator | ok
2026-04-05 00:00:37.055584 |
2026-04-05 00:00:37.055694 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-05 00:00:37.899820 | orchestrator -> localhost | ok
2026-04-05 00:00:37.905597 |
2026-04-05 00:00:37.905680 | TASK [validate-host : Collect information about the host]
2026-04-05 00:00:39.384373 | orchestrator | ok
2026-04-05 00:00:39.421003 |
2026-04-05 00:00:39.421126 | TASK [validate-host : Sanitize hostname]
2026-04-05 00:00:39.560778 | orchestrator | ok
2026-04-05 00:00:39.567327 |
2026-04-05 00:00:39.567431 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-05 00:00:41.387122 | orchestrator -> localhost | changed
2026-04-05 00:00:41.393778 |
2026-04-05 00:00:41.393875 | TASK [validate-host : Collect information about zuul worker]
2026-04-05 00:00:42.006205 | orchestrator | ok
2026-04-05 00:00:42.010562 |
2026-04-05 00:00:42.010652 | TASK [validate-host : Write out all zuul information for each host]
2026-04-05 00:00:43.269921 | orchestrator -> localhost | changed
2026-04-05 00:00:43.284369 |
2026-04-05 00:00:43.284464 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-05 00:00:43.679417 | orchestrator | ok
2026-04-05 00:00:43.692439 |
2026-04-05 00:00:43.692567 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-05 00:02:00.306117 | orchestrator | changed:
2026-04-05 00:02:00.307520 | orchestrator | .d..t...... src/
2026-04-05 00:02:00.307591 | orchestrator | .d..t...... src/github.com/
2026-04-05 00:02:00.307618 | orchestrator | .d..t...... src/github.com/osism/
2026-04-05 00:02:00.307641 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-05 00:02:00.307663 | orchestrator | RedHat.yml
2026-04-05 00:02:00.322365 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-05 00:02:00.322382 | orchestrator | RedHat.yml
2026-04-05 00:02:00.322434 | orchestrator | = 2.2.0"...
2026-04-05 00:02:11.098104 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-05 00:02:11.113040 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-05 00:02:11.250337 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-05 00:02:11.823927 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 00:02:11.885895 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-05 00:02:12.411777 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 00:02:12.681746 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-05 00:02:14.208234 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-05 00:02:14.208290 | orchestrator |
2026-04-05 00:02:14.208298 | orchestrator | Providers are signed by their developers.
2026-04-05 00:02:14.208303 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-05 00:02:14.208308 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-05 00:02:14.208314 | orchestrator |
2026-04-05 00:02:14.208318 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-05 00:02:14.208333 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-05 00:02:14.208337 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-05 00:02:14.208342 | orchestrator | you run "tofu init" in the future.
2026-04-05 00:02:14.208798 | orchestrator |
2026-04-05 00:02:14.208808 | orchestrator | OpenTofu has been successfully initialized!
2026-04-05 00:02:14.208814 | orchestrator |
2026-04-05 00:02:14.208819 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-05 00:02:14.208823 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-05 00:02:14.208830 | orchestrator | should now work.
2026-04-05 00:02:14.208834 | orchestrator |
2026-04-05 00:02:14.208842 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-05 00:02:14.208846 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-05 00:02:14.208859 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-05 00:02:14.411569 | orchestrator | Created and switched to workspace "ci"!
2026-04-05 00:02:14.411608 | orchestrator |
2026-04-05 00:02:14.411614 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-05 00:02:14.411619 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-05 00:02:14.411650 | orchestrator | for this configuration.
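The OpenTofu sequence the job just ran (provider installation, a fresh "ci" workspace, then a plan) boils down to three CLI commands. The sketch below is a dry-run wrapper that only echoes the commands by default, since the `tofu` binary and the cloud credentials this job uses are assumptions, not part of this log; unset DRY_RUN to actually execute them inside a checkout of the testbed terraform directory.

```shell
# Dry-run sketch of the OpenTofu steps shown in the log above.
# With DRY_RUN set (the default here) the commands are only printed.
run() { echo "+ $*"; [ -n "${DRY_RUN:-1}" ] || "$@"; }

run tofu init              # install providers, write .terraform.lock.hcl
run tofu workspace new ci  # create and switch to the isolated "ci" workspace
run tofu plan              # preview the resources to be created
```

Per-workspace state is what lets the same configuration drive several testbeds side by side; the `.ci` suffixes on files like `inventory.ci` in the plan below follow the workspace name.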
2026-04-05 00:02:14.532086 | orchestrator | ci.auto.tfvars
2026-04-05 00:02:14.579306 | orchestrator | default_custom.tf
2026-04-05 00:02:17.441584 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-05 00:02:18.027096 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-05 00:02:18.322092 | orchestrator |
2026-04-05 00:02:18.322146 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-05 00:02:18.322152 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-05 00:02:18.322157 | orchestrator | + create
2026-04-05 00:02:18.322162 | orchestrator | <= read (data resources)
2026-04-05 00:02:18.322166 | orchestrator |
2026-04-05 00:02:18.322170 | orchestrator | OpenTofu will perform the following actions:
2026-04-05 00:02:18.322175 | orchestrator |
2026-04-05 00:02:18.322179 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-05 00:02:18.322183 | orchestrator | # (config refers to values not yet known)
2026-04-05 00:02:18.322187 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-05 00:02:18.322191 | orchestrator | + checksum = (known after apply)
2026-04-05 00:02:18.322195 | orchestrator | + created_at = (known after apply)
2026-04-05 00:02:18.322199 | orchestrator | + file = (known after apply)
2026-04-05 00:02:18.322203 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322223 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322227 | orchestrator | + min_disk_gb = (known after apply)
2026-04-05 00:02:18.322231 | orchestrator | + min_ram_mb = (known after apply)
2026-04-05 00:02:18.322235 | orchestrator | + most_recent = true
2026-04-05 00:02:18.322239 | orchestrator | + name = (known after apply)
2026-04-05 00:02:18.322243 | orchestrator | + protected = (known after apply)
2026-04-05 00:02:18.322247 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322253 | orchestrator | + schema = (known after apply)
2026-04-05 00:02:18.322257 | orchestrator | + size_bytes = (known after apply)
2026-04-05 00:02:18.322261 | orchestrator | + tags = (known after apply)
2026-04-05 00:02:18.322265 | orchestrator | + updated_at = (known after apply)
2026-04-05 00:02:18.322269 | orchestrator | }
2026-04-05 00:02:18.322272 | orchestrator |
2026-04-05 00:02:18.322276 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-05 00:02:18.322280 | orchestrator | # (config refers to values not yet known)
2026-04-05 00:02:18.322284 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-05 00:02:18.322288 | orchestrator | + checksum = (known after apply)
2026-04-05 00:02:18.322292 | orchestrator | + created_at = (known after apply)
2026-04-05 00:02:18.322295 | orchestrator | + file = (known after apply)
2026-04-05 00:02:18.322299 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322303 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322306 | orchestrator | + min_disk_gb = (known after apply)
2026-04-05 00:02:18.322310 | orchestrator | + min_ram_mb = (known after apply)
2026-04-05 00:02:18.322314 | orchestrator | + most_recent = true
2026-04-05 00:02:18.322318 | orchestrator | + name = (known after apply)
2026-04-05 00:02:18.322321 | orchestrator | + protected = (known after apply)
2026-04-05 00:02:18.322325 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322329 | orchestrator | + schema = (known after apply)
2026-04-05 00:02:18.322332 | orchestrator | + size_bytes = (known after apply)
2026-04-05 00:02:18.322336 | orchestrator | + tags = (known after apply)
2026-04-05 00:02:18.322340 | orchestrator | + updated_at = (known after apply)
2026-04-05 00:02:18.322343 | orchestrator | }
2026-04-05 00:02:18.322347 | orchestrator |
2026-04-05 00:02:18.322351 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-05 00:02:18.322355 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-05 00:02:18.322359 | orchestrator | + content = (known after apply)
2026-04-05 00:02:18.322363 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:18.322366 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:18.322370 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:18.322374 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:18.322378 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:18.322381 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:18.322385 | orchestrator | + directory_permission = "0777"
2026-04-05 00:02:18.322389 | orchestrator | + file_permission = "0644"
2026-04-05 00:02:18.322392 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-05 00:02:18.322396 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322400 | orchestrator | }
2026-04-05 00:02:18.322404 | orchestrator |
2026-04-05 00:02:18.322407 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-05 00:02:18.322411 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-05 00:02:18.322415 | orchestrator | + content = (known after apply)
2026-04-05 00:02:18.322418 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:18.322422 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:18.322426 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:18.322430 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:18.322433 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:18.322440 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:18.322444 | orchestrator | + directory_permission = "0777"
2026-04-05 00:02:18.322448 | orchestrator | + file_permission = "0644"
2026-04-05 00:02:18.322458 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-05 00:02:18.322462 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322465 | orchestrator | }
2026-04-05 00:02:18.322469 | orchestrator |
2026-04-05 00:02:18.322473 | orchestrator | # local_file.inventory will be created
2026-04-05 00:02:18.322477 | orchestrator | + resource "local_file" "inventory" {
2026-04-05 00:02:18.322480 | orchestrator | + content = (known after apply)
2026-04-05 00:02:18.322484 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:18.322488 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:18.322492 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:18.322495 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:18.322499 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:18.322503 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:18.322507 | orchestrator | + directory_permission = "0777"
2026-04-05 00:02:18.322510 | orchestrator | + file_permission = "0644"
2026-04-05 00:02:18.322514 | orchestrator | + filename = "inventory.ci"
2026-04-05 00:02:18.322518 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322521 | orchestrator | }
2026-04-05 00:02:18.322525 | orchestrator |
2026-04-05 00:02:18.322529 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-05 00:02:18.322533 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-05 00:02:18.322536 | orchestrator | + content = (sensitive value)
2026-04-05 00:02:18.322540 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 00:02:18.322544 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 00:02:18.322547 | orchestrator | + content_md5 = (known after apply)
2026-04-05 00:02:18.322551 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 00:02:18.322555 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 00:02:18.322565 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 00:02:18.322569 | orchestrator | + directory_permission = "0700"
2026-04-05 00:02:18.322572 | orchestrator | + file_permission = "0600"
2026-04-05 00:02:18.322576 | orchestrator | + filename = ".id_rsa.ci"
2026-04-05 00:02:18.322580 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322583 | orchestrator | }
2026-04-05 00:02:18.322587 | orchestrator |
2026-04-05 00:02:18.322591 | orchestrator | # null_resource.node_semaphore will be created
2026-04-05 00:02:18.322595 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-05 00:02:18.322598 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322602 | orchestrator | }
2026-04-05 00:02:18.322606 | orchestrator |
2026-04-05 00:02:18.322609 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-05 00:02:18.322613 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-05 00:02:18.322617 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.322621 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.322624 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322628 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:18.322632 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322636 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-05 00:02:18.322639 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322643 | orchestrator | + size = 80
2026-04-05 00:02:18.322647 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.322650 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.322654 | orchestrator | }
2026-04-05 00:02:18.322658 | orchestrator |
2026-04-05 00:02:18.322662 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-05 00:02:18.322665 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:18.322669 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.322673 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.322676 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322683 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:18.322687 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322691 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-05 00:02:18.322695 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322698 | orchestrator | + size = 80
2026-04-05 00:02:18.322702 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.322706 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.322709 | orchestrator | }
2026-04-05 00:02:18.322713 | orchestrator |
2026-04-05 00:02:18.322717 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-05 00:02:18.322721 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:18.322724 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.322728 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.322732 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322736 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:18.322739 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322743 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-05 00:02:18.322747 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322750 | orchestrator | + size = 80
2026-04-05 00:02:18.322754 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.322758 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.322761 | orchestrator | }
2026-04-05 00:02:18.322765 | orchestrator |
2026-04-05 00:02:18.322769 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-05 00:02:18.322772 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:18.322776 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.322780 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.322784 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322787 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:18.322791 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322795 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-05 00:02:18.322798 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322802 | orchestrator | + size = 80
2026-04-05 00:02:18.322808 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.322812 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.322815 | orchestrator | }
2026-04-05 00:02:18.322819 | orchestrator |
2026-04-05 00:02:18.322823 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-05 00:02:18.322827 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:18.322830 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.322834 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.322838 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322841 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:18.322845 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322849 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-05 00:02:18.322853 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322856 | orchestrator | + size = 80
2026-04-05 00:02:18.322860 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.322864 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.322867 | orchestrator | }
2026-04-05 00:02:18.322871 | orchestrator |
2026-04-05 00:02:18.322875 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-05 00:02:18.322879 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:18.322882 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.322924 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.322928 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322935 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:18.322939 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322942 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-05 00:02:18.322946 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.322950 | orchestrator | + size = 80
2026-04-05 00:02:18.322953 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.322957 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.322961 | orchestrator | }
2026-04-05 00:02:18.322965 | orchestrator |
2026-04-05 00:02:18.322968 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-05 00:02:18.322975 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 00:02:18.322979 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.322983 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.322987 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.322990 | orchestrator | + image_id = (known after apply)
2026-04-05 00:02:18.322994 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.322998 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-05 00:02:18.323001 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323005 | orchestrator | + size = 80
2026-04-05 00:02:18.323009 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323012 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323016 | orchestrator | }
2026-04-05 00:02:18.323020 | orchestrator |
2026-04-05 00:02:18.323023 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-05 00:02:18.323027 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323031 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323035 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323039 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323042 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323046 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-05 00:02:18.323050 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323053 | orchestrator | + size = 20
2026-04-05 00:02:18.323057 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323061 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323065 | orchestrator | }
2026-04-05 00:02:18.323069 | orchestrator |
2026-04-05 00:02:18.323072 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-05 00:02:18.323076 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323080 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323083 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323087 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323091 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323095 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-05 00:02:18.323098 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323102 | orchestrator | + size = 20
2026-04-05 00:02:18.323106 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323109 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323113 | orchestrator | }
2026-04-05 00:02:18.323117 | orchestrator |
2026-04-05 00:02:18.323121 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-05 00:02:18.323124 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323128 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323132 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323135 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323139 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323143 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-05 00:02:18.323147 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323153 | orchestrator | + size = 20
2026-04-05 00:02:18.323157 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323160 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323164 | orchestrator | }
2026-04-05 00:02:18.323168 | orchestrator |
2026-04-05 00:02:18.323172 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-05 00:02:18.323175 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323179 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323183 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323186 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323193 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323196 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-05 00:02:18.323200 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323204 | orchestrator | + size = 20
2026-04-05 00:02:18.323207 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323211 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323215 | orchestrator | }
2026-04-05 00:02:18.323219 | orchestrator |
2026-04-05 00:02:18.323222 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-05 00:02:18.323226 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323230 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323234 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323237 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323241 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323245 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-05 00:02:18.323248 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323252 | orchestrator | + size = 20
2026-04-05 00:02:18.323256 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323260 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323263 | orchestrator | }
2026-04-05 00:02:18.323267 | orchestrator |
2026-04-05 00:02:18.323271 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-05 00:02:18.323274 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323278 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323282 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323286 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323289 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323293 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-05 00:02:18.323297 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323300 | orchestrator | + size = 20
2026-04-05 00:02:18.323304 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323308 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323311 | orchestrator | }
2026-04-05 00:02:18.323315 | orchestrator |
2026-04-05 00:02:18.323319 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-05 00:02:18.323323 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323326 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323330 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323334 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323341 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323345 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-05 00:02:18.323349 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323352 | orchestrator | + size = 20
2026-04-05 00:02:18.323356 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323360 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323363 | orchestrator | }
2026-04-05 00:02:18.323367 | orchestrator |
2026-04-05 00:02:18.323371 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-05 00:02:18.323375 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 00:02:18.323381 | orchestrator | + attachment = (known after apply)
2026-04-05 00:02:18.323385 | orchestrator | + availability_zone = "nova"
2026-04-05 00:02:18.323389 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.323393 | orchestrator | + metadata = (known after apply)
2026-04-05 00:02:18.323396 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-05 00:02:18.323400 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.323404 | orchestrator | + size = 20
2026-04-05 00:02:18.323407 | orchestrator | + volume_retype_policy = "never"
2026-04-05 00:02:18.323411 | orchestrator | + volume_type = "ssd"
2026-04-05 00:02:18.323415 | orchestrator | }
2026-04-05 00:02:18.323419 | orchestrator |
2026-04-05 00:02:18.323422 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-05 00:02:18.323426 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-05 00:02:18.323430 | orchestrator | + attachment = (known after apply) 2026-04-05 00:02:18.323434 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.323437 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.323441 | orchestrator | + metadata = (known after apply) 2026-04-05 00:02:18.323445 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-05 00:02:18.323449 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.323452 | orchestrator | + size = 20 2026-04-05 00:02:18.323456 | orchestrator | + volume_retype_policy = "never" 2026-04-05 00:02:18.323460 | orchestrator | + volume_type = "ssd" 2026-04-05 00:02:18.323463 | orchestrator | } 2026-04-05 00:02:18.323467 | orchestrator | 2026-04-05 00:02:18.323471 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-05 00:02:18.323475 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-05 00:02:18.323478 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:18.323482 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:18.323486 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:18.323489 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:18.323493 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.323497 | orchestrator | + config_drive = true 2026-04-05 00:02:18.323503 | orchestrator | + created = (known after apply) 2026-04-05 00:02:18.323506 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:18.323510 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-05 00:02:18.323514 | orchestrator | + force_delete = false 2026-04-05 00:02:18.323517 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:18.323521 | 
orchestrator | + id = (known after apply) 2026-04-05 00:02:18.323525 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:18.323529 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:18.323532 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:18.323536 | orchestrator | + name = "testbed-manager" 2026-04-05 00:02:18.323540 | orchestrator | + power_state = "active" 2026-04-05 00:02:18.323543 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.323547 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:18.323551 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:18.323554 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:18.323558 | orchestrator | + user_data = (sensitive value) 2026-04-05 00:02:18.323562 | orchestrator | 2026-04-05 00:02:18.323565 | orchestrator | + block_device { 2026-04-05 00:02:18.323569 | orchestrator | + boot_index = 0 2026-04-05 00:02:18.323573 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:18.323577 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:18.323580 | orchestrator | + multiattach = false 2026-04-05 00:02:18.323584 | orchestrator | + source_type = "volume" 2026-04-05 00:02:18.323588 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.323595 | orchestrator | } 2026-04-05 00:02:18.323599 | orchestrator | 2026-04-05 00:02:18.323602 | orchestrator | + network { 2026-04-05 00:02:18.323606 | orchestrator | + access_network = false 2026-04-05 00:02:18.323610 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:18.323613 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:18.323617 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:18.323621 | orchestrator | + name = (known after apply) 2026-04-05 00:02:18.323624 | orchestrator | + port = (known after apply) 2026-04-05 00:02:18.323628 | orchestrator | + uuid = (known after apply) 2026-04-05 
00:02:18.323632 | orchestrator | } 2026-04-05 00:02:18.323636 | orchestrator | } 2026-04-05 00:02:18.323639 | orchestrator | 2026-04-05 00:02:18.323643 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-05 00:02:18.323647 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:18.323651 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:18.323654 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:18.323658 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:18.323662 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:18.323665 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.323669 | orchestrator | + config_drive = true 2026-04-05 00:02:18.323673 | orchestrator | + created = (known after apply) 2026-04-05 00:02:18.323676 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:18.323680 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:18.323684 | orchestrator | + force_delete = false 2026-04-05 00:02:18.323687 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:18.323691 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.323695 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:18.323699 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:18.323702 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:18.323706 | orchestrator | + name = "testbed-node-0" 2026-04-05 00:02:18.323710 | orchestrator | + power_state = "active" 2026-04-05 00:02:18.323716 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.323719 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:18.323723 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:18.323727 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:18.323731 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:18.323734 | orchestrator | 2026-04-05 00:02:18.323738 | orchestrator | + block_device { 2026-04-05 00:02:18.323742 | orchestrator | + boot_index = 0 2026-04-05 00:02:18.323746 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:18.323749 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:18.323753 | orchestrator | + multiattach = false 2026-04-05 00:02:18.323757 | orchestrator | + source_type = "volume" 2026-04-05 00:02:18.323760 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.323764 | orchestrator | } 2026-04-05 00:02:18.323768 | orchestrator | 2026-04-05 00:02:18.323771 | orchestrator | + network { 2026-04-05 00:02:18.323775 | orchestrator | + access_network = false 2026-04-05 00:02:18.323779 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:18.323783 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:18.323786 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:18.323790 | orchestrator | + name = (known after apply) 2026-04-05 00:02:18.323794 | orchestrator | + port = (known after apply) 2026-04-05 00:02:18.323797 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.323801 | orchestrator | } 2026-04-05 00:02:18.323805 | orchestrator | } 2026-04-05 00:02:18.323808 | orchestrator | 2026-04-05 00:02:18.323812 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-05 00:02:18.323816 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:18.323820 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:18.323826 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:18.323830 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:18.323834 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:18.323837 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.323841 
| orchestrator | + config_drive = true 2026-04-05 00:02:18.323845 | orchestrator | + created = (known after apply) 2026-04-05 00:02:18.323848 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:18.323852 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:18.323856 | orchestrator | + force_delete = false 2026-04-05 00:02:18.323859 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:18.323863 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.323867 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:18.323870 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:18.323874 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:18.323878 | orchestrator | + name = "testbed-node-1" 2026-04-05 00:02:18.323881 | orchestrator | + power_state = "active" 2026-04-05 00:02:18.323896 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.323902 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:18.323909 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:18.323915 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:18.323924 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:18.323930 | orchestrator | 2026-04-05 00:02:18.323935 | orchestrator | + block_device { 2026-04-05 00:02:18.323939 | orchestrator | + boot_index = 0 2026-04-05 00:02:18.323943 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:18.323947 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:18.323950 | orchestrator | + multiattach = false 2026-04-05 00:02:18.323954 | orchestrator | + source_type = "volume" 2026-04-05 00:02:18.323958 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.323961 | orchestrator | } 2026-04-05 00:02:18.323965 | orchestrator | 2026-04-05 00:02:18.323969 | orchestrator | + network { 2026-04-05 00:02:18.323973 | orchestrator | + access_network = 
false 2026-04-05 00:02:18.323976 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:18.323980 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:18.323984 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:18.323987 | orchestrator | + name = (known after apply) 2026-04-05 00:02:18.323991 | orchestrator | + port = (known after apply) 2026-04-05 00:02:18.323995 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.323998 | orchestrator | } 2026-04-05 00:02:18.324002 | orchestrator | } 2026-04-05 00:02:18.324006 | orchestrator | 2026-04-05 00:02:18.324009 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-05 00:02:18.324013 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:18.324017 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:18.324020 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:18.324024 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:18.324028 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:18.324032 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.324035 | orchestrator | + config_drive = true 2026-04-05 00:02:18.324039 | orchestrator | + created = (known after apply) 2026-04-05 00:02:18.324043 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:18.324047 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:18.324050 | orchestrator | + force_delete = false 2026-04-05 00:02:18.324054 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:18.324058 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.324061 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:18.324069 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:18.324072 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:18.324076 | orchestrator | + name = 
"testbed-node-2" 2026-04-05 00:02:18.324080 | orchestrator | + power_state = "active" 2026-04-05 00:02:18.324083 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.324087 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:18.324091 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:18.324094 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:18.324098 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:18.324102 | orchestrator | 2026-04-05 00:02:18.324105 | orchestrator | + block_device { 2026-04-05 00:02:18.324109 | orchestrator | + boot_index = 0 2026-04-05 00:02:18.324113 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:18.324117 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:18.324123 | orchestrator | + multiattach = false 2026-04-05 00:02:18.324127 | orchestrator | + source_type = "volume" 2026-04-05 00:02:18.324131 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324134 | orchestrator | } 2026-04-05 00:02:18.324138 | orchestrator | 2026-04-05 00:02:18.324142 | orchestrator | + network { 2026-04-05 00:02:18.324145 | orchestrator | + access_network = false 2026-04-05 00:02:18.324149 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:18.324153 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:18.324157 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:18.324160 | orchestrator | + name = (known after apply) 2026-04-05 00:02:18.324164 | orchestrator | + port = (known after apply) 2026-04-05 00:02:18.324168 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324171 | orchestrator | } 2026-04-05 00:02:18.324175 | orchestrator | } 2026-04-05 00:02:18.324179 | orchestrator | 2026-04-05 00:02:18.324184 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-05 00:02:18.324188 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:18.324192 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:18.324196 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:18.324199 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:18.324203 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:18.324207 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.324210 | orchestrator | + config_drive = true 2026-04-05 00:02:18.324214 | orchestrator | + created = (known after apply) 2026-04-05 00:02:18.324218 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:18.324221 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:18.324225 | orchestrator | + force_delete = false 2026-04-05 00:02:18.324229 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:18.324233 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.324236 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:18.324240 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:18.324244 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:18.324247 | orchestrator | + name = "testbed-node-3" 2026-04-05 00:02:18.324251 | orchestrator | + power_state = "active" 2026-04-05 00:02:18.324255 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.324258 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:18.324262 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:18.324266 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:18.324269 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:18.324273 | orchestrator | 2026-04-05 00:02:18.324277 | orchestrator | + block_device { 2026-04-05 00:02:18.324280 | orchestrator | + boot_index = 0 2026-04-05 00:02:18.324284 | orchestrator | + delete_on_termination = false 2026-04-05 
00:02:18.324288 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:18.324294 | orchestrator | + multiattach = false 2026-04-05 00:02:18.324298 | orchestrator | + source_type = "volume" 2026-04-05 00:02:18.324302 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324306 | orchestrator | } 2026-04-05 00:02:18.324309 | orchestrator | 2026-04-05 00:02:18.324313 | orchestrator | + network { 2026-04-05 00:02:18.324317 | orchestrator | + access_network = false 2026-04-05 00:02:18.324320 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:18.324324 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:18.324328 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:18.324331 | orchestrator | + name = (known after apply) 2026-04-05 00:02:18.324335 | orchestrator | + port = (known after apply) 2026-04-05 00:02:18.324338 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324342 | orchestrator | } 2026-04-05 00:02:18.324346 | orchestrator | } 2026-04-05 00:02:18.324350 | orchestrator | 2026-04-05 00:02:18.324353 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-05 00:02:18.324357 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:18.324361 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:18.324364 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:18.324368 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:18.324372 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:18.324375 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.324379 | orchestrator | + config_drive = true 2026-04-05 00:02:18.324383 | orchestrator | + created = (known after apply) 2026-04-05 00:02:18.324386 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:18.324390 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:18.324394 | 
orchestrator | + force_delete = false 2026-04-05 00:02:18.324397 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:18.324401 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.324404 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:18.324408 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:18.324412 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:18.324415 | orchestrator | + name = "testbed-node-4" 2026-04-05 00:02:18.324419 | orchestrator | + power_state = "active" 2026-04-05 00:02:18.324423 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.324426 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:18.324430 | orchestrator | + stop_before_destroy = false 2026-04-05 00:02:18.324434 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:18.324437 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:18.324441 | orchestrator | 2026-04-05 00:02:18.324445 | orchestrator | + block_device { 2026-04-05 00:02:18.324448 | orchestrator | + boot_index = 0 2026-04-05 00:02:18.324452 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:18.324456 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:18.324459 | orchestrator | + multiattach = false 2026-04-05 00:02:18.324463 | orchestrator | + source_type = "volume" 2026-04-05 00:02:18.324467 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324470 | orchestrator | } 2026-04-05 00:02:18.324474 | orchestrator | 2026-04-05 00:02:18.324478 | orchestrator | + network { 2026-04-05 00:02:18.324481 | orchestrator | + access_network = false 2026-04-05 00:02:18.324485 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:18.324489 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:18.324492 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:18.324496 | orchestrator | + name = (known 
after apply) 2026-04-05 00:02:18.324500 | orchestrator | + port = (known after apply) 2026-04-05 00:02:18.324505 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324509 | orchestrator | } 2026-04-05 00:02:18.324513 | orchestrator | } 2026-04-05 00:02:18.324519 | orchestrator | 2026-04-05 00:02:18.324523 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-05 00:02:18.324526 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 00:02:18.324530 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 00:02:18.324534 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 00:02:18.324537 | orchestrator | + all_metadata = (known after apply) 2026-04-05 00:02:18.324541 | orchestrator | + all_tags = (known after apply) 2026-04-05 00:02:18.324544 | orchestrator | + availability_zone = "nova" 2026-04-05 00:02:18.324548 | orchestrator | + config_drive = true 2026-04-05 00:02:18.324552 | orchestrator | + created = (known after apply) 2026-04-05 00:02:18.324555 | orchestrator | + flavor_id = (known after apply) 2026-04-05 00:02:18.324559 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 00:02:18.324563 | orchestrator | + force_delete = false 2026-04-05 00:02:18.324566 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 00:02:18.324570 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.324574 | orchestrator | + image_id = (known after apply) 2026-04-05 00:02:18.324577 | orchestrator | + image_name = (known after apply) 2026-04-05 00:02:18.324581 | orchestrator | + key_pair = "testbed" 2026-04-05 00:02:18.324585 | orchestrator | + name = "testbed-node-5" 2026-04-05 00:02:18.324588 | orchestrator | + power_state = "active" 2026-04-05 00:02:18.324592 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.324596 | orchestrator | + security_groups = (known after apply) 2026-04-05 00:02:18.324599 | orchestrator | + 
stop_before_destroy = false 2026-04-05 00:02:18.324603 | orchestrator | + updated = (known after apply) 2026-04-05 00:02:18.324606 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 00:02:18.324610 | orchestrator | 2026-04-05 00:02:18.324614 | orchestrator | + block_device { 2026-04-05 00:02:18.324617 | orchestrator | + boot_index = 0 2026-04-05 00:02:18.324621 | orchestrator | + delete_on_termination = false 2026-04-05 00:02:18.324625 | orchestrator | + destination_type = "volume" 2026-04-05 00:02:18.324628 | orchestrator | + multiattach = false 2026-04-05 00:02:18.324632 | orchestrator | + source_type = "volume" 2026-04-05 00:02:18.324636 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324639 | orchestrator | } 2026-04-05 00:02:18.324643 | orchestrator | 2026-04-05 00:02:18.324647 | orchestrator | + network { 2026-04-05 00:02:18.324650 | orchestrator | + access_network = false 2026-04-05 00:02:18.324654 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 00:02:18.324658 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 00:02:18.324661 | orchestrator | + mac = (known after apply) 2026-04-05 00:02:18.324665 | orchestrator | + name = (known after apply) 2026-04-05 00:02:18.324668 | orchestrator | + port = (known after apply) 2026-04-05 00:02:18.324672 | orchestrator | + uuid = (known after apply) 2026-04-05 00:02:18.324676 | orchestrator | } 2026-04-05 00:02:18.324679 | orchestrator | } 2026-04-05 00:02:18.324683 | orchestrator | 2026-04-05 00:02:18.324687 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-05 00:02:18.324690 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-05 00:02:18.324694 | orchestrator | + fingerprint = (known after apply) 2026-04-05 00:02:18.324698 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.324701 | orchestrator | + name = "testbed" 2026-04-05 00:02:18.324705 | orchestrator | + private_key = 
(sensitive value) 2026-04-05 00:02:18.324708 | orchestrator | + public_key = (known after apply) 2026-04-05 00:02:18.324712 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.324716 | orchestrator | + user_id = (known after apply) 2026-04-05 00:02:18.324719 | orchestrator | } 2026-04-05 00:02:18.324723 | orchestrator | 2026-04-05 00:02:18.324727 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-05 00:02:18.324730 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:18.324738 | orchestrator | + device = (known after apply) 2026-04-05 00:02:18.324742 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.324745 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:18.324749 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.324755 | orchestrator | + volume_id = (known after apply) 2026-04-05 00:02:18.324758 | orchestrator | } 2026-04-05 00:02:18.324762 | orchestrator | 2026-04-05 00:02:18.324766 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-05 00:02:18.324770 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 00:02:18.324773 | orchestrator | + device = (known after apply) 2026-04-05 00:02:18.324777 | orchestrator | + id = (known after apply) 2026-04-05 00:02:18.324780 | orchestrator | + instance_id = (known after apply) 2026-04-05 00:02:18.324784 | orchestrator | + region = (known after apply) 2026-04-05 00:02:18.324788 | orchestrator | + volume_id = (known after apply) 2026-04-05 00:02:18.324791 | orchestrator | } 2026-04-05 00:02:18.324795 | orchestrator | 2026-04-05 00:02:18.324799 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-05 00:02:18.324802 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
    {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-05 00:02:18.334851 | orchestrator | + network_id = (known after apply)
2026-04-05 00:02:18.334858 | orchestrator | + no_gateway = false
2026-04-05 00:02:18.334864 | orchestrator | + region = (known after apply)
2026-04-05 00:02:18.334871 | orchestrator | + service_types = (known after apply)
2026-04-05 00:02:18.334882 | orchestrator | + tenant_id = (known after apply)
2026-04-05 00:02:18.334907 | orchestrator |
2026-04-05 00:02:18.334914 | orchestrator | + allocation_pool {
2026-04-05 00:02:18.334921 | orchestrator | + end = "192.168.31.250"
2026-04-05 00:02:18.334928 | orchestrator | + start = "192.168.31.200"
2026-04-05 00:02:18.334935 | orchestrator | }
2026-04-05 00:02:18.334943 | orchestrator | }
2026-04-05 00:02:18.334950 | orchestrator |
2026-04-05 00:02:18.334957 | orchestrator | # terraform_data.image will be created
2026-04-05 00:02:18.334964 | orchestrator | + resource "terraform_data" "image" {
2026-04-05 00:02:18.334971 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.334978 | orchestrator | + input = "Ubuntu 24.04"
2026-04-05 00:02:18.334999 | orchestrator | + output = (known after apply)
2026-04-05 00:02:18.335006 | orchestrator | }
2026-04-05 00:02:18.335013 | orchestrator |
2026-04-05 00:02:18.335020 | orchestrator | # terraform_data.image_node will be created
2026-04-05 00:02:18.335026 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-05 00:02:18.335034 | orchestrator | + id = (known after apply)
2026-04-05 00:02:18.335041 | orchestrator | + input = "Ubuntu 24.04"
2026-04-05 00:02:18.335048 | orchestrator | + output = (known after apply)
2026-04-05 00:02:18.335055 | orchestrator | }
2026-04-05 00:02:18.335062 | orchestrator |
2026-04-05 00:02:18.335070 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
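For orientation, a plan entry such as `security_group_rule_vrrp` above would typically originate from an HCL resource block along the following lines. This is a hedged reconstruction from the plan output, not the testbed repository's actual source; in particular, referencing the security group via `openstack_networking_secgroup_v2.security_group_management.id` is an assumption.

```hcl
# Hypothetical sketch reconstructed from the plan output above; attribute
# values mirror the plan, the security-group reference is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol number 112
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

Fields reported as `(known after apply)` in the plan (id, region, tenant_id, ...) are computed by the provider and are not set in the configuration.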
2026-04-05 00:02:18.335076 | orchestrator |
2026-04-05 00:02:18.335083 | orchestrator | Changes to Outputs:
2026-04-05 00:02:18.335090 | orchestrator | + manager_address = (sensitive value)
2026-04-05 00:02:18.335097 | orchestrator | + private_key = (sensitive value)
2026-04-05 00:02:22.323983 | orchestrator | terraform_data.image_node: Creating...
2026-04-05 00:02:22.324081 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=703d8345-cf50-e5ee-01fb-0b9bcc4d80f8]
2026-04-05 00:02:22.324096 | orchestrator | terraform_data.image: Creating...
2026-04-05 00:02:22.328527 | orchestrator | terraform_data.image: Creation complete after 0s [id=a026faaf-b182-b0c0-4872-5742c6ad3d3d]
2026-04-05 00:02:22.335630 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-05 00:02:22.336536 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-05 00:02:22.340805 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-05 00:02:22.350704 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-05 00:02:22.352161 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-05 00:02:22.354965 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-05 00:02:22.365790 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-05 00:02:22.365824 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-05 00:02:22.366697 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-05 00:02:22.375774 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-05 00:02:22.842281 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-04-05 00:02:22.843550 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-05 00:02:22.846531 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-05 00:02:22.851527 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-05 00:02:23.122990 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-05 00:02:23.127408 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-05 00:02:23.439010 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=c80aa245-11ab-417d-b33b-ab37f38911ce]
2026-04-05 00:02:23.452102 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-05 00:02:23.456237 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=37838ddc8e467d1183f017bd540ad5263ab35f6d]
2026-04-05 00:02:23.471748 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-05 00:02:23.474477 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=669d331d0e57c0caf842971f0b341930dfa7beb5]
2026-04-05 00:02:23.478162 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-05 00:02:25.955787 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=ca139ca2-9428-4862-b2c5-b387113f92e8]
2026-04-05 00:02:25.962794 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-05 00:02:25.966392 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=98068efd-febf-4a3d-a208-2ec8969defa3]
2026-04-05 00:02:25.970239 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-05 00:02:25.984864 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=38b6e962-bf0a-4437-92be-df56b43fc17a]
2026-04-05 00:02:25.994467 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-05 00:02:26.010301 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=7e73ac44-76fe-4853-8c7e-76a35261b68e]
2026-04-05 00:02:26.019127 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=cd3e0233-fa53-4a76-8124-17084efe5189]
2026-04-05 00:02:26.027991 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-05 00:02:26.028183 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da]
2026-04-05 00:02:26.032520 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-05 00:02:26.042435 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-05 00:02:26.078253 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=16d4ab4f-df2e-4494-9775-e59359a49379]
2026-04-05 00:02:26.083491 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-05 00:02:26.089211 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=50c87a36-4bc6-4e8b-871c-1038d731a8f6]
2026-04-05 00:02:26.321012 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=89f7f52a-567c-4cab-9983-76602271fa86]
2026-04-05 00:02:26.822327 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=9454863e-d4fd-4747-9f0b-49c782acb536]
2026-04-05 00:02:27.018766 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=13e0958d-0294-4d38-9c19-05d6fc1baa00]
2026-04-05 00:02:27.026597 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-05 00:02:29.364192 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=cc38e3f2-573f-4547-a204-e1f48ae0a849]
2026-04-05 00:02:29.368023 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=6cf61d5b-3b59-4520-9cc2-8285b407910f]
2026-04-05 00:02:29.415540 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=9d8fe4d0-47c6-47ce-9739-2701ccce9737]
2026-04-05 00:02:29.417325 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=a9214721-eba3-44ac-9648-f7cb9ca525d3]
2026-04-05 00:02:29.427276 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=af4f9d3c-bb5f-4922-ae65-7a1b824f675e]
2026-04-05 00:02:29.650807 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=60b18cd9-cde4-4a47-bd2f-2a39c218ea3e]
2026-04-05 00:02:30.585326 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=b051418f-a9d9-4c76-a1f0-c9d7c68b0aa2]
2026-04-05 00:02:30.588315 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-05 00:02:30.592546 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-05 00:02:30.594394 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-05 00:02:30.838280 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=61ef876d-bb29-434b-ba7b-488cc0cf9fdd]
2026-04-05 00:02:30.843927 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-05 00:02:30.847770 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-05 00:02:30.848637 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a5948a76-52ad-4cb7-ae03-bd6db06fd9e7]
2026-04-05 00:02:30.850202 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-05 00:02:30.850985 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-05 00:02:30.851714 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-05 00:02:30.851846 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-05 00:02:30.862469 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-05 00:02:30.867688 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-05 00:02:30.871634 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-05 00:02:31.104281 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=2206b321-a69b-4b7f-b1dd-3a7c197a2e93]
2026-04-05 00:02:31.112666 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-05 00:02:31.292246 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=19021041-ef14-4a34-a243-c4b62c5989f3]
2026-04-05 00:02:31.300716 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-05 00:02:31.329045 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=184cb270-719a-4789-a833-b637bf6ada89]
2026-04-05 00:02:31.337279 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-05 00:02:31.652044 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=fc878745-102f-4ad5-bdbf-f7a55176220d]
2026-04-05 00:02:31.660350 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-05 00:02:31.684312 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=f04b57f3-5912-47fc-a04f-bd4082ea3ed8]
2026-04-05 00:02:31.693225 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-05 00:02:31.908800 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=60aced75-7b1e-45ec-8d09-254ce16ae78f]
2026-04-05 00:02:31.914302 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-05 00:02:32.015223 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=45b161c7-d4c7-4e10-ac2d-f1372a2e6b89]
2026-04-05 00:02:32.020047 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-05 00:02:32.169405 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=6a98f27f-1c06-4bd9-a845-3b6170e577cb]
2026-04-05 00:02:32.215286 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=54c51861-08e9-46c6-9b12-8eb9cd0572c5]
2026-04-05 00:02:32.437583 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=48b5227e-b0a7-40e2-870b-c10c36eb76aa]
2026-04-05 00:02:32.465456 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=b73d39c7-effe-4fe0-a755-aa2a39cc6f98]
2026-04-05 00:02:32.574827 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=8ab79469-bfbd-4190-b0cb-8ddb2c6a71b1]
2026-04-05 00:02:32.921625 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=567547fa-e2ea-4556-b278-b50bb5c21ed6]
2026-04-05 00:02:33.078361 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=56f30c95-9ff7-40be-8013-b0affbaad565]
2026-04-05 00:02:33.219943 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=b8a0b284-9bdd-40cb-bd06-90e47090365c]
2026-04-05 00:02:33.424588 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=ee0bc8b3-5246-4b3e-97b9-1f196e3281f4]
2026-04-05 00:02:35.194558 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=e871e497-db6e-45ff-9481-9640001da818]
2026-04-05 00:02:35.215478 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-05 00:02:35.240872 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-05 00:02:35.244549 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-05 00:02:35.246460 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-05 00:02:35.250648 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-05 00:02:35.267321 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-05 00:02:35.268376 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-05 00:02:37.651085 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=6902812c-7b94-4c89-92d3-5f0fdce2bada]
2026-04-05 00:02:37.656849 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-05 00:02:37.670768 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-05 00:02:37.674583 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b4ce328fc7308219dd41f49f69c9363473681769]
2026-04-05 00:02:37.675602 | orchestrator | local_file.inventory: Creating...
2026-04-05 00:02:37.678935 | orchestrator | local_file.inventory: Creation complete after 0s [id=41ee9071bae7b16934780421a95d927771d0f052]
2026-04-05 00:02:40.143542 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=6902812c-7b94-4c89-92d3-5f0fdce2bada]
2026-04-05 00:02:45.243361 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-05 00:02:45.245615 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-05 00:02:45.250098 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-05 00:02:45.251280 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-05 00:02:45.268650 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-05 00:02:45.268737 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-05 00:02:55.252851 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-05 00:02:55.253024 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-05 00:02:55.253042 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-05 00:02:55.253053 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-05 00:02:55.269258 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-05 00:02:55.269363 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-05 00:02:56.095368 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=e80a9ea2-930e-4223-8303-d429a9b7b62a]
2026-04-05 00:03:05.261775 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-05 00:03:05.261882 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-05 00:03:05.261898 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-05 00:03:05.261941 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-05 00:03:05.270218 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-05 00:03:06.416111 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=1ee30789-00ff-4269-9a73-f5251204282d]
2026-04-05 00:03:06.457655 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=48ebc51a-b7eb-4433-a717-aec385b59303]
2026-04-05 00:03:06.593302 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=26b797a2-2a38-486d-abed-b24913e38bb2]
2026-04-05 00:03:06.601454 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=76b82b51-5909-431b-8bc6-3a19c3dab2ba]
2026-04-05 00:03:06.797754 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=a0eb7f31-e6e8-4a5f-9a43-edbcf3465dd7]
2026-04-05 00:03:06.820149 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-05 00:03:06.824021 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-05 00:03:06.826823 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-05 00:03:06.826912 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4483618234495080276]
2026-04-05 00:03:06.827629 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-05 00:03:06.830981 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-05 00:03:06.831213 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-05 00:03:06.831675 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-05 00:03:06.836485 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-05 00:03:06.838687 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-05 00:03:06.851857 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-05 00:03:06.867648 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-05 00:03:10.279221 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=e80a9ea2-930e-4223-8303-d429a9b7b62a/89f7f52a-567c-4cab-9983-76602271fa86]
2026-04-05 00:03:10.298322 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=1ee30789-00ff-4269-9a73-f5251204282d/ca139ca2-9428-4862-b2c5-b387113f92e8]
2026-04-05 00:03:10.303416 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=a0eb7f31-e6e8-4a5f-9a43-edbcf3465dd7/f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da]
2026-04-05 00:03:10.323680 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=e80a9ea2-930e-4223-8303-d429a9b7b62a/98068efd-febf-4a3d-a208-2ec8969defa3]
2026-04-05 00:03:10.332795 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=1ee30789-00ff-4269-9a73-f5251204282d/38b6e962-bf0a-4437-92be-df56b43fc17a]
2026-04-05 00:03:10.345137 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=a0eb7f31-e6e8-4a5f-9a43-edbcf3465dd7/16d4ab4f-df2e-4494-9775-e59359a49379]
2026-04-05 00:03:16.424137 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=1ee30789-00ff-4269-9a73-f5251204282d/cd3e0233-fa53-4a76-8124-17084efe5189]
2026-04-05 00:03:16.438413 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=a0eb7f31-e6e8-4a5f-9a43-edbcf3465dd7/50c87a36-4bc6-4e8b-871c-1038d731a8f6]
2026-04-05 00:03:16.456484 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=e80a9ea2-930e-4223-8303-d429a9b7b62a/7e73ac44-76fe-4853-8c7e-76a35261b68e]
2026-04-05 00:03:16.870092 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-05 00:03:26.870723 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-05 00:03:27.248109 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=32820476-8de1-4319-946c-66bd74f982f3]
2026-04-05 00:03:27.265990 | orchestrator |
2026-04-05 00:03:27.266105 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-05 00:03:27.266121 | orchestrator |
2026-04-05 00:03:27.266130 | orchestrator | Outputs:
2026-04-05 00:03:27.266139 | orchestrator |
2026-04-05 00:03:27.266147 | orchestrator | manager_address =
2026-04-05 00:03:27.266156 | orchestrator | private_key =
2026-04-05 00:03:27.422629 | orchestrator | ok: Runtime: 0:01:16.446262
2026-04-05 00:03:27.459080 |
2026-04-05 00:03:27.459213 | TASK [Create infrastructure (stable)]
2026-04-05 00:03:27.995229 | orchestrator | skipping: Conditional result was False
2026-04-05 00:03:28.005073 |
2026-04-05 00:03:28.005199 | TASK [Fetch manager address]
2026-04-05 00:03:28.449035 | orchestrator | ok
2026-04-05 00:03:28.463213 |
2026-04-05 00:03:28.463476 | TASK [Set manager_host address]
2026-04-05 00:03:28.546949 | orchestrator | ok
2026-04-05 00:03:28.558472 |
2026-04-05 00:03:28.558600 | LOOP [Update ansible collections]
2026-04-05 00:03:29.331006 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-05 00:03:29.331279 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-05 00:03:29.331320 | orchestrator | Starting galaxy collection install process
2026-04-05 00:03:29.331344 | orchestrator | Process install dependency map
2026-04-05 00:03:29.331366 | orchestrator | Starting collection install process
2026-04-05 00:03:29.331386 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-04-05 00:03:29.331440 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-04-05 00:03:29.331479 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-05 00:03:29.331526 | orchestrator | ok: Item: commons Runtime: 0:00:00.465481
2026-04-05 00:03:30.121979 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-05 00:03:30.122110 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-05 00:03:30.122141 | orchestrator | Starting galaxy collection install process
2026-04-05 00:03:30.122164 | orchestrator | Process install dependency map
2026-04-05 00:03:30.122185 | orchestrator | Starting collection install process
2026-04-05 00:03:30.122205 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-04-05 00:03:30.122225 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-04-05 00:03:30.122246 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-05 00:03:30.122279 | orchestrator | ok: Item: services Runtime: 0:00:00.523500
2026-04-05 00:03:30.139834 |
2026-04-05 00:03:30.139989 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-05 00:03:40.705909 | orchestrator | ok
2026-04-05 00:03:40.714579 |
2026-04-05 00:03:40.714686 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-05 00:04:40.755168 | orchestrator | ok
2026-04-05 00:04:40.763224 |
2026-04-05 00:04:40.763331 | TASK [Fetch manager ssh hostkey]
2026-04-05 00:04:42.336727 | orchestrator | Output suppressed because no_log was given
2026-04-05 00:04:42.354571 |
2026-04-05 00:04:42.354729 | TASK [Get ssh keypair from terraform environment]
2026-04-05 00:04:42.890505 | orchestrator | ok: Runtime: 0:00:00.011099
2026-04-05 00:04:42.905952 |
2026-04-05 00:04:42.906102 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-05 00:04:42.957070 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-05 00:04:42.966302 |
2026-04-05 00:04:42.966464 | TASK [Run manager part 0]
2026-04-05 00:04:43.872101 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-05 00:04:43.930713 | orchestrator |
2026-04-05 00:04:43.930766 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-05 00:04:43.930775 | orchestrator |
2026-04-05 00:04:43.930802 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-05 00:04:46.360420 | orchestrator | ok: [testbed-manager]
2026-04-05 00:04:46.360483 | orchestrator |
2026-04-05 00:04:46.360515 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-05 00:04:46.360529 | orchestrator |
2026-04-05 00:04:46.360542 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 00:04:48.320517 | orchestrator | ok: [testbed-manager]
2026-04-05 00:04:48.320574 | orchestrator |
2026-04-05 00:04:48.320581 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-05 00:04:48.998627 | orchestrator | ok: [testbed-manager]
2026-04-05 00:04:48.998679 | orchestrator |
2026-04-05 00:04:48.998687 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-05 00:04:49.049614 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:04:49.049663 | orchestrator |
2026-04-05 00:04:49.049674 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-05 00:04:49.087096 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:04:49.087165 | orchestrator |
2026-04-05 00:04:49.087179 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-05 00:04:49.123691 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:04:49.123750 | orchestrator |
2026-04-05 00:04:49.123759 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-05 00:04:49.874245 | orchestrator | changed: [testbed-manager]
2026-04-05 00:04:49.874329 | orchestrator |
2026-04-05 00:04:49.874344 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-05 00:08:17.364058 | orchestrator | changed: [testbed-manager]
2026-04-05 00:08:17.364162 | orchestrator |
2026-04-05 00:08:17.364175 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-05 00:09:47.044747 | orchestrator | changed: [testbed-manager]
2026-04-05 00:09:47.044803 | orchestrator |
2026-04-05 00:09:47.044821 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-05 00:10:13.132998 | orchestrator | changed: [testbed-manager]
2026-04-05 00:10:13.133137 | orchestrator |
2026-04-05 00:10:13.133157 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-05 00:10:23.378861 | orchestrator | changed: [testbed-manager]
2026-04-05 00:10:23.378902 | orchestrator |
2026-04-05 00:10:23.378910 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-05 00:10:23.426605 | orchestrator | ok: [testbed-manager] 2026-04-05 00:10:23.426695 | orchestrator | 2026-04-05 00:10:23.426713 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-05 00:10:24.260815 | orchestrator | ok: [testbed-manager] 2026-04-05 00:10:24.260850 | orchestrator | 2026-04-05 00:10:24.260856 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-05 00:10:25.122068 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:25.122325 | orchestrator | 2026-04-05 00:10:25.122349 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-05 00:10:31.747303 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:31.747370 | orchestrator | 2026-04-05 00:10:31.747377 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-05 00:10:38.131040 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:38.131141 | orchestrator | 2026-04-05 00:10:38.131158 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-05 00:10:40.895067 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:40.895165 | orchestrator | 2026-04-05 00:10:40.895181 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-05 00:10:42.732507 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:42.732626 | orchestrator | 2026-04-05 00:10:42.732915 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-05 00:10:43.891391 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 00:10:43.891536 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 00:10:43.891555 | orchestrator | 2026-04-05 00:10:43.891572 | orchestrator | TASK [Sync 
sources in /opt/src] ************************************************ 2026-04-05 00:10:43.939027 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 00:10:43.939076 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 00:10:43.939081 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-05 00:10:43.939087 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-05 00:10:47.180831 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 00:10:47.180921 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 00:10:47.180931 | orchestrator | 2026-04-05 00:10:47.180939 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-05 00:10:47.737340 | orchestrator | changed: [testbed-manager] 2026-04-05 00:10:47.737433 | orchestrator | 2026-04-05 00:10:47.737450 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-05 00:13:09.056137 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-05 00:13:09.056200 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-05 00:13:09.056213 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-05 00:13:09.056219 | orchestrator | 2026-04-05 00:13:09.056225 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-05 00:13:11.478431 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-05 00:13:11.479215 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-05 00:13:11.479275 | orchestrator | 2026-04-05 00:13:11.479294 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-05 
00:13:11.479311 | orchestrator | 2026-04-05 00:13:11.479324 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:13:12.935655 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:12.935742 | orchestrator | 2026-04-05 00:13:12.935769 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-05 00:13:12.981547 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:12.981585 | orchestrator | 2026-04-05 00:13:12.981592 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-05 00:13:13.049432 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:13.049472 | orchestrator | 2026-04-05 00:13:13.049480 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-05 00:13:13.869325 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:13.869410 | orchestrator | 2026-04-05 00:13:13.869436 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-05 00:13:14.618893 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:14.618948 | orchestrator | 2026-04-05 00:13:14.618960 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-05 00:13:16.051726 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-05 00:13:16.051763 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-05 00:13:16.051770 | orchestrator | 2026-04-05 00:13:16.051926 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-05 00:13:17.510183 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:17.510276 | orchestrator | 2026-04-05 00:13:17.510293 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-05 00:13:19.356559 | orchestrator | changed: 
[testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 00:13:19.356610 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-05 00:13:19.356624 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-05 00:13:19.356630 | orchestrator | 2026-04-05 00:13:19.356636 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-05 00:13:19.416603 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:19.416669 | orchestrator | 2026-04-05 00:13:19.416681 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-05 00:13:19.503644 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:19.503693 | orchestrator | 2026-04-05 00:13:19.503699 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-05 00:13:20.102650 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:20.102739 | orchestrator | 2026-04-05 00:13:20.102756 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-05 00:13:20.176631 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:20.176710 | orchestrator | 2026-04-05 00:13:20.176721 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-05 00:13:21.062401 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:13:21.062490 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:21.062505 | orchestrator | 2026-04-05 00:13:21.062518 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-05 00:13:21.099641 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:21.099706 | orchestrator | 2026-04-05 00:13:21.099714 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-05 
00:13:21.130987 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:21.131048 | orchestrator | 2026-04-05 00:13:21.131056 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-05 00:13:21.169881 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:21.170058 | orchestrator | 2026-04-05 00:13:21.170071 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-05 00:13:21.246477 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:21.246571 | orchestrator | 2026-04-05 00:13:21.246587 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-05 00:13:22.000719 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:22.001530 | orchestrator | 2026-04-05 00:13:22.001563 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-05 00:13:22.001575 | orchestrator | 2026-04-05 00:13:22.001588 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:13:23.405159 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:23.405206 | orchestrator | 2026-04-05 00:13:23.405211 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-05 00:13:24.396964 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:24.397016 | orchestrator | 2026-04-05 00:13:24.397022 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:13:24.397028 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-05 00:13:24.397032 | orchestrator | 2026-04-05 00:13:24.822680 | orchestrator | ok: Runtime: 0:08:41.233604 2026-04-05 00:13:24.840809 | 2026-04-05 00:13:24.840963 | TASK [Point out that logging in to the manager is now possible] 2026-04-05 00:13:24.888804 | 
orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-05 00:13:24.899417 | 2026-04-05 00:13:24.899577 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-05 00:13:24.940001 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-05 00:13:24.950261 | 2026-04-05 00:13:24.950423 | TASK [Run manager part 1 + 2] 2026-04-05 00:13:25.785868 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 00:13:25.842259 | orchestrator | 2026-04-05 00:13:25.842305 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-05 00:13:25.842313 | orchestrator | 2026-04-05 00:13:25.842326 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:13:28.977337 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:28.977387 | orchestrator | 2026-04-05 00:13:28.977413 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-05 00:13:29.024279 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:29.024337 | orchestrator | 2026-04-05 00:13:29.024349 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-05 00:13:29.074768 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:29.074840 | orchestrator | 2026-04-05 00:13:29.074852 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-05 00:13:29.127556 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:29.127614 | orchestrator | 2026-04-05 00:13:29.127626 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-05 00:13:29.201150 | orchestrator | ok: 
[testbed-manager] 2026-04-05 00:13:29.201204 | orchestrator | 2026-04-05 00:13:29.201214 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-05 00:13:29.273002 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:29.273051 | orchestrator | 2026-04-05 00:13:29.273059 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-05 00:13:29.316560 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-05 00:13:29.316606 | orchestrator | 2026-04-05 00:13:29.316612 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-05 00:13:30.075034 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:30.075087 | orchestrator | 2026-04-05 00:13:30.075096 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-05 00:13:30.122534 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:30.122579 | orchestrator | 2026-04-05 00:13:30.122585 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-05 00:13:31.540509 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:31.540616 | orchestrator | 2026-04-05 00:13:31.540637 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-05 00:13:32.156914 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:32.157016 | orchestrator | 2026-04-05 00:13:32.157040 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-05 00:13:33.371786 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:33.371883 | orchestrator | 2026-04-05 00:13:33.371901 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-05 00:13:49.347228 | 
orchestrator | changed: [testbed-manager] 2026-04-05 00:13:49.347332 | orchestrator | 2026-04-05 00:13:49.347351 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-05 00:13:50.077044 | orchestrator | ok: [testbed-manager] 2026-04-05 00:13:50.078085 | orchestrator | 2026-04-05 00:13:50.078132 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-05 00:13:50.135248 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:13:50.135305 | orchestrator | 2026-04-05 00:13:50.135312 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-05 00:13:51.124966 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:51.125024 | orchestrator | 2026-04-05 00:13:51.125033 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-05 00:13:52.168249 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:52.168321 | orchestrator | 2026-04-05 00:13:52.168338 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-05 00:13:52.749331 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:52.749431 | orchestrator | 2026-04-05 00:13:52.749454 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-05 00:13:52.789227 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 00:13:52.789342 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 00:13:52.789359 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-05 00:13:52.789371 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-05 00:13:54.951966 | orchestrator | changed: [testbed-manager] 2026-04-05 00:13:54.952072 | orchestrator | 2026-04-05 00:13:54.952090 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-05 00:14:04.179016 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-05 00:14:04.179109 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-05 00:14:04.179125 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-05 00:14:04.179136 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-05 00:14:04.179154 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-05 00:14:04.179165 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-05 00:14:04.179175 | orchestrator | 2026-04-05 00:14:04.179186 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-05 00:14:05.283039 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:05.283083 | orchestrator | 2026-04-05 00:14:05.283092 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-05 00:14:08.541591 | orchestrator | changed: [testbed-manager] 2026-04-05 00:14:08.541656 | orchestrator | 2026-04-05 00:14:08.541666 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-05 00:14:08.584171 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:14:08.584261 | orchestrator | 2026-04-05 00:14:08.584278 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-05 00:15:49.517580 | orchestrator | changed: [testbed-manager] 2026-04-05 00:15:49.517620 | orchestrator | 2026-04-05 00:15:49.517626 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-05 00:15:50.789206 | orchestrator | ok: [testbed-manager] 2026-04-05 00:15:50.789314 | 
orchestrator | 2026-04-05 00:15:50.789338 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:15:50.789353 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-05 00:15:50.789365 | orchestrator | 2026-04-05 00:15:51.131137 | orchestrator | ok: Runtime: 0:02:25.653255 2026-04-05 00:15:51.149608 | 2026-04-05 00:15:51.149755 | TASK [Reboot manager] 2026-04-05 00:15:52.687027 | orchestrator | ok: Runtime: 0:00:01.015813 2026-04-05 00:15:52.703665 | 2026-04-05 00:15:52.703832 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-05 00:16:09.189818 | orchestrator | ok 2026-04-05 00:16:09.201208 | 2026-04-05 00:16:09.201350 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-05 00:17:09.250925 | orchestrator | ok 2026-04-05 00:17:09.260371 | 2026-04-05 00:17:09.260501 | TASK [Deploy manager + bootstrap nodes] 2026-04-05 00:17:11.948869 | orchestrator | 2026-04-05 00:17:11.949038 | orchestrator | # DEPLOY MANAGER 2026-04-05 00:17:11.949049 | orchestrator | 2026-04-05 00:17:11.949062 | orchestrator | + set -e 2026-04-05 00:17:11.949067 | orchestrator | + echo 2026-04-05 00:17:11.949073 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-05 00:17:11.949085 | orchestrator | + echo 2026-04-05 00:17:11.949122 | orchestrator | + cat /opt/manager-vars.sh 2026-04-05 00:17:11.950254 | orchestrator | export NUMBER_OF_NODES=6 2026-04-05 00:17:11.950270 | orchestrator | 2026-04-05 00:17:11.950275 | orchestrator | export CEPH_VERSION=reef 2026-04-05 00:17:11.950280 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-05 00:17:11.950285 | orchestrator | export MANAGER_VERSION=latest 2026-04-05 00:17:11.950303 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-05 00:17:11.950307 | orchestrator | 2026-04-05 00:17:11.950320 | orchestrator | export ARA=false 2026-04-05 00:17:11.950325 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-05 00:17:11.950338 | orchestrator | export TEMPEST=true 2026-04-05 00:17:11.950342 | orchestrator | export IS_ZUUL=true 2026-04-05 00:17:11.950346 | orchestrator | 2026-04-05 00:17:11.950353 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 00:17:11.950358 | orchestrator | export EXTERNAL_API=false 2026-04-05 00:17:11.950362 | orchestrator | 2026-04-05 00:17:11.950366 | orchestrator | export IMAGE_USER=ubuntu 2026-04-05 00:17:11.950372 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-05 00:17:11.950376 | orchestrator | 2026-04-05 00:17:11.950380 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-05 00:17:11.950387 | orchestrator | 2026-04-05 00:17:11.950392 | orchestrator | + echo 2026-04-05 00:17:11.950397 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:17:11.951777 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:17:11.951788 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:17:11.951792 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:17:11.951796 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:17:11.952022 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:17:11.952084 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:17:11.952090 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:17:11.952094 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 00:17:11.952098 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 00:17:11.952102 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:17:11.952373 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:17:11.952380 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 00:17:11.952384 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 00:17:11.952388 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 00:17:11.952415 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 00:17:11.952419 | orchestrator | ++ 
export ARA=false 2026-04-05 00:17:11.952423 | orchestrator | ++ ARA=false 2026-04-05 00:17:11.952427 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:17:11.952431 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:17:11.952435 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:17:11.952439 | orchestrator | ++ TEMPEST=true 2026-04-05 00:17:11.952443 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:17:11.952446 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:17:11.952450 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 00:17:11.952454 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 00:17:11.952458 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:17:11.952462 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:17:11.952468 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:17:11.952472 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:17:11.952476 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:17:11.952480 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 00:17:11.952484 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:17:11.952488 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:17:11.952561 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-05 00:17:12.013681 | orchestrator | + docker version 2026-04-05 00:17:12.132953 | orchestrator | Client: Docker Engine - Community 2026-04-05 00:17:12.133028 | orchestrator | Version: 27.5.1 2026-04-05 00:17:12.133034 | orchestrator | API version: 1.47 2026-04-05 00:17:12.133041 | orchestrator | Go version: go1.22.11 2026-04-05 00:17:12.133045 | orchestrator | Git commit: 9f9e405 2026-04-05 00:17:12.133049 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 00:17:12.133054 | orchestrator | OS/Arch: linux/amd64 2026-04-05 00:17:12.133058 | orchestrator | Context: default 2026-04-05 00:17:12.133062 | orchestrator | 2026-04-05 
00:17:12.133066 | orchestrator | Server: Docker Engine - Community 2026-04-05 00:17:12.133070 | orchestrator | Engine: 2026-04-05 00:17:12.133074 | orchestrator | Version: 27.5.1 2026-04-05 00:17:12.133079 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-05 00:17:12.133143 | orchestrator | Go version: go1.22.11 2026-04-05 00:17:12.133147 | orchestrator | Git commit: 4c9b3b0 2026-04-05 00:17:12.133152 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 00:17:12.133155 | orchestrator | OS/Arch: linux/amd64 2026-04-05 00:17:12.133159 | orchestrator | Experimental: false 2026-04-05 00:17:12.133163 | orchestrator | containerd: 2026-04-05 00:17:12.133175 | orchestrator | Version: v2.2.2 2026-04-05 00:17:12.133179 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-05 00:17:12.133183 | orchestrator | runc: 2026-04-05 00:17:12.133187 | orchestrator | Version: 1.3.4 2026-04-05 00:17:12.133191 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-05 00:17:12.133195 | orchestrator | docker-init: 2026-04-05 00:17:12.133199 | orchestrator | Version: 0.19.0 2026-04-05 00:17:12.133204 | orchestrator | GitCommit: de40ad0 2026-04-05 00:17:12.136705 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-05 00:17:12.147489 | orchestrator | + set -e 2026-04-05 00:17:12.147513 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:17:12.147520 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:17:12.147528 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:17:12.147532 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 00:17:12.147536 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 00:17:12.147540 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:17:12.147545 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:17:12.147549 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 00:17:12.147553 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 
00:17:12.147557 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 00:17:12.147562 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 00:17:12.147568 | orchestrator | ++ export ARA=false 2026-04-05 00:17:12.147574 | orchestrator | ++ ARA=false 2026-04-05 00:17:12.147580 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:17:12.147586 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:17:12.147592 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:17:12.147598 | orchestrator | ++ TEMPEST=true 2026-04-05 00:17:12.147604 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:17:12.147611 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:17:12.147620 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 00:17:12.147628 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 00:17:12.147633 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:17:12.147637 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:17:12.147641 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:17:12.147644 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:17:12.147648 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:17:12.147663 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 00:17:12.147667 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:17:12.147774 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:17:12.147780 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:17:12.147885 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:17:12.147891 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:17:12.148048 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:17:12.148058 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:17:12.148716 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 00:17:12.148725 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:17:12.148778 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-05 00:17:12.157048 | orchestrator | + set -e 2026-04-05 00:17:12.157063 | orchestrator | + VERSION=reef 2026-04-05 00:17:12.157395 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-05 00:17:12.165031 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-05 00:17:12.165085 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-05 00:17:12.170111 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-05 00:17:12.177908 | orchestrator | + set -e 2026-04-05 00:17:12.177921 | orchestrator | + VERSION=2024.2 2026-04-05 00:17:12.178358 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-05 00:17:12.182571 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-05 00:17:12.182588 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-05 00:17:12.187745 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-05 00:17:12.188806 | orchestrator | ++ semver latest 7.0.0 2026-04-05 00:17:12.255686 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:17:12.255728 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:17:12.255734 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-05 00:17:12.257030 | orchestrator | ++ semver latest 10.0.0-0 2026-04-05 00:17:12.317147 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:17:12.317904 | orchestrator | ++ semver 2024.2 2025.1 2026-04-05 00:17:12.380930 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:17:12.380963 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-05 00:17:12.476505 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-05 00:17:12.478257 | orchestrator | + source /opt/venv/bin/activate 
2026-04-05 00:17:12.480283 | orchestrator | ++ deactivate nondestructive 2026-04-05 00:17:12.480295 | orchestrator | ++ '[' -n '' ']' 2026-04-05 00:17:12.480299 | orchestrator | ++ '[' -n '' ']' 2026-04-05 00:17:12.480304 | orchestrator | ++ hash -r 2026-04-05 00:17:12.480308 | orchestrator | ++ '[' -n '' ']' 2026-04-05 00:17:12.480357 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-05 00:17:12.480363 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-05 00:17:12.480369 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-05 00:17:12.480409 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-05 00:17:12.480414 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-05 00:17:12.480418 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-05 00:17:12.480472 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-05 00:17:12.480543 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-05 00:17:12.480611 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-05 00:17:12.480617 | orchestrator | ++ export PATH 2026-04-05 00:17:12.480724 | orchestrator | ++ '[' -n '' ']' 2026-04-05 00:17:12.480799 | orchestrator | ++ '[' -z '' ']' 2026-04-05 00:17:12.480805 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-05 00:17:12.480809 | orchestrator | ++ PS1='(venv) ' 2026-04-05 00:17:12.480866 | orchestrator | ++ export PS1 2026-04-05 00:17:12.480872 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-05 00:17:12.480876 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-05 00:17:12.480880 | orchestrator | ++ hash -r 2026-04-05 00:17:12.481117 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-05 00:17:13.880487 | orchestrator | 2026-04-05 00:17:13.880556 | 
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-05 00:17:13.880563 | orchestrator | 2026-04-05 00:17:13.880568 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-05 00:17:14.488087 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:14.488156 | orchestrator | 2026-04-05 00:17:14.488163 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-05 00:17:15.511797 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:15.511864 | orchestrator | 2026-04-05 00:17:15.511871 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-05 00:17:15.511876 | orchestrator | 2026-04-05 00:17:15.511881 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:17:18.171446 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:18.171557 | orchestrator | 2026-04-05 00:17:18.171574 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-05 00:17:18.227112 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:18.227218 | orchestrator | 2026-04-05 00:17:18.227237 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-05 00:17:18.706350 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:18.706403 | orchestrator | 2026-04-05 00:17:18.706409 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-05 00:17:18.749338 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:17:18.749378 | orchestrator | 2026-04-05 00:17:18.749383 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-05 00:17:19.121179 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:19.121250 | orchestrator | 2026-04-05 
00:17:19.121255 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-05 00:17:19.458314 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:19.458393 | orchestrator | 2026-04-05 00:17:19.458400 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-05 00:17:19.583417 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:17:19.583474 | orchestrator | 2026-04-05 00:17:19.583480 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-05 00:17:19.583485 | orchestrator | 2026-04-05 00:17:19.583489 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:17:21.391545 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:21.391739 | orchestrator | 2026-04-05 00:17:21.391770 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-05 00:17:21.482148 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-05 00:17:21.482219 | orchestrator | 2026-04-05 00:17:21.482227 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-05 00:17:21.555206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-05 00:17:21.555320 | orchestrator | 2026-04-05 00:17:21.555344 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-05 00:17:22.698008 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-05 00:17:22.698112 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-05 00:17:22.698123 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-05 00:17:22.698131 | orchestrator | 2026-04-05 00:17:22.698139 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-05 00:17:24.539528 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-05 00:17:24.539617 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-05 00:17:24.539626 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-05 00:17:24.539633 | orchestrator | 2026-04-05 00:17:24.539640 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-05 00:17:25.219758 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:17:25.220849 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:25.220888 | orchestrator | 2026-04-05 00:17:25.220901 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-05 00:17:25.874534 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:17:25.874689 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:25.874711 | orchestrator | 2026-04-05 00:17:25.874724 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-05 00:17:25.931422 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:17:25.931548 | orchestrator | 2026-04-05 00:17:25.931566 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-05 00:17:26.342144 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:26.501869 | orchestrator | 2026-04-05 00:17:26.501948 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-05 00:17:26.501986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-05 00:17:26.501999 | orchestrator | 2026-04-05 00:17:26.502011 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-04-05 00:17:27.557638 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:27.557781 | orchestrator | 2026-04-05 00:17:27.557808 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-05 00:17:28.440221 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:28.440336 | orchestrator | 2026-04-05 00:17:28.440357 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-05 00:17:39.652122 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:39.652232 | orchestrator | 2026-04-05 00:17:39.652280 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-05 00:17:39.707599 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:17:39.707851 | orchestrator | 2026-04-05 00:17:39.707870 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-05 00:17:39.707883 | orchestrator | 2026-04-05 00:17:39.707895 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:17:41.617913 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:41.617999 | orchestrator | 2026-04-05 00:17:41.618079 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-05 00:17:41.727478 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-05 00:17:41.727574 | orchestrator | 2026-04-05 00:17:41.727590 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-05 00:17:41.796357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:17:41.796412 | orchestrator | 2026-04-05 00:17:41.796425 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-04-05 00:17:44.425446 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:44.425547 | orchestrator | 2026-04-05 00:17:44.425563 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-05 00:17:44.472923 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:44.473024 | orchestrator | 2026-04-05 00:17:44.473040 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-05 00:17:44.607722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-05 00:17:44.607825 | orchestrator | 2026-04-05 00:17:44.607843 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-05 00:17:47.517407 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-05 00:17:47.517517 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-05 00:17:47.517533 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-05 00:17:47.517546 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-05 00:17:47.517557 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-05 00:17:47.517568 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-05 00:17:47.517579 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-05 00:17:47.517591 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-05 00:17:47.517602 | orchestrator | 2026-04-05 00:17:47.517614 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-05 00:17:48.165137 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:48.165241 | orchestrator | 2026-04-05 00:17:48.165257 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-04-05 00:17:48.817589 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:48.817754 | orchestrator | 2026-04-05 00:17:48.817771 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-05 00:17:48.888909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-05 00:17:48.888997 | orchestrator | 2026-04-05 00:17:48.889009 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-05 00:17:50.103313 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-05 00:17:50.103428 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-05 00:17:50.103444 | orchestrator | 2026-04-05 00:17:50.103455 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-05 00:17:50.756107 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:50.756216 | orchestrator | 2026-04-05 00:17:50.756250 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-05 00:17:50.820550 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:17:50.820716 | orchestrator | 2026-04-05 00:17:50.820740 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-05 00:17:50.904558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-05 00:17:50.904693 | orchestrator | 2026-04-05 00:17:50.904710 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-05 00:17:51.567277 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:51.567387 | orchestrator | 2026-04-05 00:17:51.567405 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-04-05 00:17:51.634712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-05 00:17:51.634865 | orchestrator | 2026-04-05 00:17:51.634884 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-05 00:17:53.026117 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:17:53.026244 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:17:53.026263 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:53.026290 | orchestrator | 2026-04-05 00:17:53.026303 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-05 00:17:53.686878 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:53.686999 | orchestrator | 2026-04-05 00:17:53.687023 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-05 00:17:53.746300 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:17:53.746392 | orchestrator | 2026-04-05 00:17:53.746406 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-05 00:17:53.837283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-05 00:17:53.837370 | orchestrator | 2026-04-05 00:17:53.837381 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-05 00:17:54.401973 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:54.402168 | orchestrator | 2026-04-05 00:17:54.402215 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-05 00:17:54.815610 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:54.815740 | orchestrator | 2026-04-05 00:17:54.815757 | 
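The two sysctl tasks above raise the inotify limits that the manager's file-watching services (e.g. the celery/conductor components) depend on. A drop-in of the kind the role would produce might look like this; the actual values are not shown in the log, so the numbers below are purely illustrative:

```
# /etc/sysctl.d/99-inotify.conf -- values are assumptions, not from the log
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 1024
```

Such a drop-in is applied with `sysctl --system` (or at boot).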
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-05 00:17:56.074839 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-05 00:17:56.074912 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-05 00:17:56.074918 | orchestrator | 2026-04-05 00:17:56.074924 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-05 00:17:56.727965 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:56.728079 | orchestrator | 2026-04-05 00:17:56.728098 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-05 00:17:57.103209 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:57.103316 | orchestrator | 2026-04-05 00:17:57.103332 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-05 00:17:57.465570 | orchestrator | changed: [testbed-manager] 2026-04-05 00:17:57.465762 | orchestrator | 2026-04-05 00:17:57.465788 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-05 00:17:57.507691 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:17:57.507812 | orchestrator | 2026-04-05 00:17:57.507836 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-05 00:17:57.573333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-05 00:17:57.573420 | orchestrator | 2026-04-05 00:17:57.573432 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-05 00:17:57.622112 | orchestrator | ok: [testbed-manager] 2026-04-05 00:17:57.622201 | orchestrator | 2026-04-05 00:17:57.622215 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-05 
00:17:59.681067 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-05 00:17:59.681184 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-05 00:17:59.681202 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-05 00:17:59.681216 | orchestrator | 2026-04-05 00:17:59.681230 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-05 00:18:00.403332 | orchestrator | changed: [testbed-manager] 2026-04-05 00:18:00.403407 | orchestrator | 2026-04-05 00:18:00.403416 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-05 00:18:01.155215 | orchestrator | changed: [testbed-manager] 2026-04-05 00:18:01.155319 | orchestrator | 2026-04-05 00:18:01.155336 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-05 00:18:01.881994 | orchestrator | changed: [testbed-manager] 2026-04-05 00:18:01.882160 | orchestrator | 2026-04-05 00:18:01.882182 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-05 00:18:01.957252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-05 00:18:01.957381 | orchestrator | 2026-04-05 00:18:01.957405 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-05 00:18:02.004590 | orchestrator | ok: [testbed-manager] 2026-04-05 00:18:02.004736 | orchestrator | 2026-04-05 00:18:02.004752 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-05 00:18:02.749043 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-05 00:18:02.749148 | orchestrator | 2026-04-05 00:18:02.749165 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-04-05 00:18:02.833687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-05 00:18:02.833784 | orchestrator | 2026-04-05 00:18:02.833799 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-05 00:18:03.544499 | orchestrator | changed: [testbed-manager] 2026-04-05 00:18:03.544601 | orchestrator | 2026-04-05 00:18:03.544618 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-05 00:18:04.208274 | orchestrator | ok: [testbed-manager] 2026-04-05 00:18:04.208381 | orchestrator | 2026-04-05 00:18:04.208398 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-05 00:18:04.264278 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:18:04.264371 | orchestrator | 2026-04-05 00:18:04.264385 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-05 00:18:04.329057 | orchestrator | ok: [testbed-manager] 2026-04-05 00:18:04.329151 | orchestrator | 2026-04-05 00:18:04.329166 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-05 00:18:05.192706 | orchestrator | changed: [testbed-manager] 2026-04-05 00:18:05.192822 | orchestrator | 2026-04-05 00:18:05.192842 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-05 00:19:18.325963 | orchestrator | changed: [testbed-manager] 2026-04-05 00:19:18.326137 | orchestrator | 2026-04-05 00:19:18.326156 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-05 00:19:19.323351 | orchestrator | ok: [testbed-manager] 2026-04-05 00:19:19.323470 | orchestrator | 2026-04-05 00:19:19.323491 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-04-05 00:19:19.387648 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:19:19.387772 | orchestrator | 2026-04-05 00:19:19.387798 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-05 00:19:21.815855 | orchestrator | changed: [testbed-manager] 2026-04-05 00:19:21.815957 | orchestrator | 2026-04-05 00:19:21.815974 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-05 00:19:21.920398 | orchestrator | ok: [testbed-manager] 2026-04-05 00:19:21.920469 | orchestrator | 2026-04-05 00:19:21.920494 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-05 00:19:21.920500 | orchestrator | 2026-04-05 00:19:21.920506 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-05 00:19:21.982962 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:19:21.983043 | orchestrator | 2026-04-05 00:19:21.983054 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-05 00:20:22.029516 | orchestrator | Pausing for 60 seconds 2026-04-05 00:20:22.029725 | orchestrator | changed: [testbed-manager] 2026-04-05 00:20:22.029744 | orchestrator | 2026-04-05 00:20:22.029757 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-05 00:20:25.144201 | orchestrator | changed: [testbed-manager] 2026-04-05 00:20:25.144287 | orchestrator | 2026-04-05 00:20:25.144298 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-05 00:21:27.207614 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-05 00:21:27.207714 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-05 00:21:27.207729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-05 00:21:27.207764 | orchestrator | changed: [testbed-manager] 2026-04-05 00:21:27.207777 | orchestrator | 2026-04-05 00:21:27.207789 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-05 00:21:32.779686 | orchestrator | changed: [testbed-manager] 2026-04-05 00:21:32.779799 | orchestrator | 2026-04-05 00:21:32.779816 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-05 00:21:32.880161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-05 00:21:32.880260 | orchestrator | 2026-04-05 00:21:32.880276 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-05 00:21:32.880288 | orchestrator | 2026-04-05 00:21:32.880299 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-05 00:21:32.932881 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:21:32.932962 | orchestrator | 2026-04-05 00:21:32.932972 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-05 00:21:33.018208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-05 00:21:33.018301 | orchestrator | 2026-04-05 00:21:33.018313 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-05 00:21:33.890955 | orchestrator | changed: [testbed-manager] 2026-04-05 00:21:33.891081 | orchestrator | 2026-04-05 00:21:33.891100 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-05 00:21:37.282863 | 
orchestrator | ok: [testbed-manager] 2026-04-05 00:21:37.282978 | orchestrator | 2026-04-05 00:21:37.282997 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-05 00:21:37.355935 | orchestrator | ok: [testbed-manager] => { 2026-04-05 00:21:37.356062 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-05 00:21:37.356087 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-05 00:21:37.356108 | orchestrator | "Checking running containers against expected versions...", 2026-04-05 00:21:37.356127 | orchestrator | "", 2026-04-05 00:21:37.356146 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-05 00:21:37.356163 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-05 00:21:37.356181 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356199 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-05 00:21:37.356216 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356234 | orchestrator | "", 2026-04-05 00:21:37.356253 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-05 00:21:37.356273 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-05 00:21:37.356291 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356306 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-05 00:21:37.356318 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356329 | orchestrator | "", 2026-04-05 00:21:37.356340 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-05 00:21:37.356351 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-05 00:21:37.356363 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356374 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-05 
00:21:37.356385 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356396 | orchestrator | "", 2026-04-05 00:21:37.356407 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-05 00:21:37.356419 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-05 00:21:37.356432 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356444 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-05 00:21:37.356457 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356471 | orchestrator | "", 2026-04-05 00:21:37.356483 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-05 00:21:37.356522 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-05 00:21:37.356566 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356581 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-05 00:21:37.356592 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356603 | orchestrator | "", 2026-04-05 00:21:37.356614 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-05 00:21:37.356626 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.356637 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356648 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.356659 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356670 | orchestrator | "", 2026-04-05 00:21:37.356682 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-05 00:21:37.356693 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-05 00:21:37.356704 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356715 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-05 00:21:37.356726 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356738 | orchestrator | "", 2026-04-05 
00:21:37.356749 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-05 00:21:37.356760 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-05 00:21:37.356775 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356798 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-05 00:21:37.356837 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356856 | orchestrator | "", 2026-04-05 00:21:37.356874 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-05 00:21:37.356898 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-05 00:21:37.356915 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.356934 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-05 00:21:37.356951 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.356970 | orchestrator | "", 2026-04-05 00:21:37.356988 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-05 00:21:37.357006 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-05 00:21:37.357024 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.357044 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-05 00:21:37.357062 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.357081 | orchestrator | "", 2026-04-05 00:21:37.357100 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-05 00:21:37.357118 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357136 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.357155 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357174 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.357193 | orchestrator | "", 2026-04-05 00:21:37.357205 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 
2026-04-05 00:21:37.357216 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357227 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.357238 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357249 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.357260 | orchestrator | "", 2026-04-05 00:21:37.357271 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-05 00:21:37.357282 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357292 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.357303 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357314 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.357325 | orchestrator | "", 2026-04-05 00:21:37.357336 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-05 00:21:37.357346 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357357 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.357380 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357391 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.357402 | orchestrator | "", 2026-04-05 00:21:37.357413 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-05 00:21:37.357447 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357459 | orchestrator | " Enabled: true", 2026-04-05 00:21:37.357470 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-05 00:21:37.357480 | orchestrator | " Status: ✅ MATCH", 2026-04-05 00:21:37.357492 | orchestrator | "", 2026-04-05 00:21:37.357503 | orchestrator | "=== Summary ===", 2026-04-05 00:21:37.357513 | orchestrator | "Errors (version mismatches): 0", 2026-04-05 00:21:37.357524 | orchestrator | "Warnings (expected containers not running): 0", 
2026-04-05 00:21:37.357561 | orchestrator | "", 2026-04-05 00:21:37.357581 | orchestrator | "✅ All running containers match expected versions!" 2026-04-05 00:21:37.357593 | orchestrator | ] 2026-04-05 00:21:37.357604 | orchestrator | } 2026-04-05 00:21:37.357616 | orchestrator | 2026-04-05 00:21:37.357627 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-05 00:21:37.423021 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:21:37.423121 | orchestrator | 2026-04-05 00:21:37.423136 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:21:37.423149 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-05 00:21:37.423161 | orchestrator | 2026-04-05 00:21:37.543009 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-05 00:21:37.543106 | orchestrator | + deactivate 2026-04-05 00:21:37.543123 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-05 00:21:37.543136 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-05 00:21:37.543147 | orchestrator | + export PATH 2026-04-05 00:21:37.543159 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-05 00:21:37.543170 | orchestrator | + '[' -n '' ']' 2026-04-05 00:21:37.543181 | orchestrator | + hash -r 2026-04-05 00:21:37.543192 | orchestrator | + '[' -n '' ']' 2026-04-05 00:21:37.543202 | orchestrator | + unset VIRTUAL_ENV 2026-04-05 00:21:37.543213 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-05 00:21:37.543224 | orchestrator | + '[' '!' 
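The version check whose output appears above compares, per service, an expected image reference against what is actually running, then prints a summary of mismatches. A sketch of that loop, with the container lookup injected as a command so it can be stubbed (the real script presumably uses `docker inspect`; `check_versions` and the `name=image` argument format are assumptions for illustration):

```shell
# Compare expected image references against running ones.
# $1 is a command that maps a service name to its running image;
# remaining args are "service=expected-image" pairs.
check_versions() {
    local lookup="$1"; shift
    local errors=0 entry name expected running
    for entry in "$@"; do
        name="${entry%%=*}"
        expected="${entry#*=}"
        running="$("$lookup" "$name")"
        if [[ "$running" == "$expected" ]]; then
            echo "$name: MATCH"
        else
            echo "$name: MISMATCH (running $running, expected $expected)"
            errors=$((errors + 1))
        fi
    done
    echo "Errors: $errors"
    return "$errors"
}
```

Returning the error count makes the script usable as a gate: a non-zero exit fails the calling task, which is why the play above reports `Errors (version mismatches): 0` on success.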
'' = nondestructive ']' 2026-04-05 00:21:37.543236 | orchestrator | + unset -f deactivate 2026-04-05 00:21:37.543247 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-05 00:21:37.554644 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 00:21:37.554737 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-05 00:21:37.554753 | orchestrator | + local max_attempts=60 2026-04-05 00:21:37.554764 | orchestrator | + local name=ceph-ansible 2026-04-05 00:21:37.554776 | orchestrator | + local attempt_num=1 2026-04-05 00:21:37.555771 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:21:37.603938 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:21:37.604087 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-05 00:21:37.604115 | orchestrator | + local max_attempts=60 2026-04-05 00:21:37.604134 | orchestrator | + local name=kolla-ansible 2026-04-05 00:21:37.604180 | orchestrator | + local attempt_num=1 2026-04-05 00:21:37.604764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-05 00:21:37.646116 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:21:37.646237 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-05 00:21:37.646254 | orchestrator | + local max_attempts=60 2026-04-05 00:21:37.646265 | orchestrator | + local name=osism-ansible 2026-04-05 00:21:37.646277 | orchestrator | + local attempt_num=1 2026-04-05 00:21:37.646379 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-05 00:21:37.680597 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:21:37.680651 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-05 00:21:37.680656 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-05 00:21:38.351170 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-05 00:21:38.517949 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-05 00:21:38.518137 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 00:21:38.518157 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 00:21:38.518169 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-05 00:21:38.518182 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-04-05 00:21:38.518193 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:21:38.518204 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:21:38.518215 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-05 00:21:38.518240 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:21:38.518252 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-05 00:21:38.518263 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:21:38.518273 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-05 00:21:38.518284 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 00:21:38.518295 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-05 00:21:38.518306 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-05 00:21:38.518317 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-05 00:21:38.523570 | orchestrator | ++ semver latest 7.0.0 2026-04-05 00:21:38.578864 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:21:38.578941 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:21:38.578955 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-05 00:21:38.584336 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-05 00:21:50.976970 | orchestrator | 2026-04-05 00:21:50 | INFO  | Prepare task for execution of resolvconf. 2026-04-05 00:21:51.191388 | orchestrator | 2026-04-05 00:21:51 | INFO  | Task 5e959d25-dff5-4ce1-bfcf-72ad6d87b33d (resolvconf) was prepared for execution. 2026-04-05 00:21:51.191509 | orchestrator | 2026-04-05 00:21:51 | INFO  | It takes a moment until task 5e959d25-dff5-4ce1-bfcf-72ad6d87b33d (resolvconf) has been started and output is visible here. 
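The `wait_for_container_healthy` calls traced earlier (for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`) show only the immediate-success path in the xtrace output. A plausible reconstruction, consistent with the traced locals (`max_attempts`, `name`, `attempt_num`) and the `docker inspect -f '{{.State.Health.Status}}'` probe, might look like this. The retry loop, the sleep interval, and the `health_status_of` indirection (which lets the sketch run without Docker) are assumptions, not the actual testbed script:

```shell
# Sketch of the wait_for_container_healthy helper whose xtrace appears
# in the log. health_status_of stands in for the real probe:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
health_status_of() {
    # HEALTH_CMD is a test hook; unset, it just reports "healthy".
    ${HEALTH_CMD:-echo} healthy
}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll until the container reports healthy or attempts run out.
    while [ "$(health_status_of "$name")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed interval; not visible in the trace
    done
    return 0
}

wait_for_container_healthy 60 ceph-ansible
```

Because all three containers were already healthy on the first probe, the log shows each call entering and returning immediately, which is why no retry iterations appear in the trace.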
2026-04-05 00:22:03.682852 | orchestrator | 2026-04-05 00:22:03.682986 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-05 00:22:03.683054 | orchestrator | 2026-04-05 00:22:03.683077 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:22:03.683096 | orchestrator | Sunday 05 April 2026 00:21:54 +0000 (0:00:00.193) 0:00:00.193 ********** 2026-04-05 00:22:03.683127 | orchestrator | ok: [testbed-manager] 2026-04-05 00:22:03.683148 | orchestrator | 2026-04-05 00:22:03.683169 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-05 00:22:03.683191 | orchestrator | Sunday 05 April 2026 00:21:57 +0000 (0:00:03.701) 0:00:03.894 ********** 2026-04-05 00:22:03.683209 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:22:03.683229 | orchestrator | 2026-04-05 00:22:03.683246 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-05 00:22:03.683264 | orchestrator | Sunday 05 April 2026 00:21:57 +0000 (0:00:00.064) 0:00:03.959 ********** 2026-04-05 00:22:03.683283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-05 00:22:03.683299 | orchestrator | 2026-04-05 00:22:03.683315 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-05 00:22:03.683332 | orchestrator | Sunday 05 April 2026 00:21:57 +0000 (0:00:00.086) 0:00:04.046 ********** 2026-04-05 00:22:03.683349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:22:03.683367 | orchestrator | 2026-04-05 00:22:03.683401 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-05 00:22:03.683421 | orchestrator | Sunday 05 April 2026 00:21:58 +0000 (0:00:00.086) 0:00:04.133 ********** 2026-04-05 00:22:03.683440 | orchestrator | ok: [testbed-manager] 2026-04-05 00:22:03.683459 | orchestrator | 2026-04-05 00:22:03.683478 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-05 00:22:03.683497 | orchestrator | Sunday 05 April 2026 00:21:59 +0000 (0:00:01.063) 0:00:05.196 ********** 2026-04-05 00:22:03.683515 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:22:03.683590 | orchestrator | 2026-04-05 00:22:03.683612 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-05 00:22:03.683631 | orchestrator | Sunday 05 April 2026 00:21:59 +0000 (0:00:00.057) 0:00:05.253 ********** 2026-04-05 00:22:03.683651 | orchestrator | ok: [testbed-manager] 2026-04-05 00:22:03.683670 | orchestrator | 2026-04-05 00:22:03.683690 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-05 00:22:03.683710 | orchestrator | Sunday 05 April 2026 00:21:59 +0000 (0:00:00.541) 0:00:05.795 ********** 2026-04-05 00:22:03.683730 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:22:03.683747 | orchestrator | 2026-04-05 00:22:03.683765 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-05 00:22:03.683784 | orchestrator | Sunday 05 April 2026 00:21:59 +0000 (0:00:00.072) 0:00:05.867 ********** 2026-04-05 00:22:03.683804 | orchestrator | changed: [testbed-manager] 2026-04-05 00:22:03.683825 | orchestrator | 2026-04-05 00:22:03.683845 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-05 00:22:03.683866 | orchestrator | Sunday 05 April 2026 00:22:00 +0000 (0:00:00.526) 0:00:06.394 ********** 2026-04-05 00:22:03.683885 | orchestrator | changed: 
[testbed-manager] 2026-04-05 00:22:03.683904 | orchestrator | 2026-04-05 00:22:03.683923 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-05 00:22:03.683942 | orchestrator | Sunday 05 April 2026 00:22:01 +0000 (0:00:01.061) 0:00:07.455 ********** 2026-04-05 00:22:03.683962 | orchestrator | ok: [testbed-manager] 2026-04-05 00:22:03.684007 | orchestrator | 2026-04-05 00:22:03.684026 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-05 00:22:03.684044 | orchestrator | Sunday 05 April 2026 00:22:02 +0000 (0:00:00.976) 0:00:08.432 ********** 2026-04-05 00:22:03.684064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-05 00:22:03.684083 | orchestrator | 2026-04-05 00:22:03.684102 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-05 00:22:03.684121 | orchestrator | Sunday 05 April 2026 00:22:02 +0000 (0:00:00.074) 0:00:08.506 ********** 2026-04-05 00:22:03.684141 | orchestrator | changed: [testbed-manager] 2026-04-05 00:22:03.684159 | orchestrator | 2026-04-05 00:22:03.684180 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:22:03.684200 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:22:03.684218 | orchestrator | 2026-04-05 00:22:03.684235 | orchestrator | 2026-04-05 00:22:03.684253 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:22:03.684271 | orchestrator | Sunday 05 April 2026 00:22:03 +0000 (0:00:01.095) 0:00:09.602 ********** 2026-04-05 00:22:03.684289 | orchestrator | =============================================================================== 2026-04-05 00:22:03.684306 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.70s 2026-04-05 00:22:03.684326 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2026-04-05 00:22:03.684346 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.06s 2026-04-05 00:22:03.684365 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s 2026-04-05 00:22:03.684383 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2026-04-05 00:22:03.684401 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2026-04-05 00:22:03.684447 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2026-04-05 00:22:03.684468 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-04-05 00:22:03.684486 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-04-05 00:22:03.684502 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-04-05 00:22:03.684513 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-04-05 00:22:03.684524 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-05 00:22:03.684558 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-05 00:22:03.822961 | orchestrator | + osism apply sshconfig 2026-04-05 00:22:15.143999 | orchestrator | 2026-04-05 00:22:15 | INFO  | Prepare task for execution of sshconfig. 2026-04-05 00:22:15.213842 | orchestrator | 2026-04-05 00:22:15 | INFO  | Task e13dc044-ca9b-42d3-8518-fcfcb77fae7a (sshconfig) was prepared for execution. 
2026-04-05 00:22:15.213939 | orchestrator | 2026-04-05 00:22:15 | INFO  | It takes a moment until task e13dc044-ca9b-42d3-8518-fcfcb77fae7a (sshconfig) has been started and output is visible here. 2026-04-05 00:22:26.164308 | orchestrator | 2026-04-05 00:22:26.164417 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-05 00:22:26.164432 | orchestrator | 2026-04-05 00:22:26.164444 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-05 00:22:26.164455 | orchestrator | Sunday 05 April 2026 00:22:18 +0000 (0:00:00.176) 0:00:00.176 ********** 2026-04-05 00:22:26.164465 | orchestrator | ok: [testbed-manager] 2026-04-05 00:22:26.164476 | orchestrator | 2026-04-05 00:22:26.164487 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-05 00:22:26.164574 | orchestrator | Sunday 05 April 2026 00:22:19 +0000 (0:00:00.889) 0:00:01.066 ********** 2026-04-05 00:22:26.164586 | orchestrator | changed: [testbed-manager] 2026-04-05 00:22:26.164597 | orchestrator | 2026-04-05 00:22:26.164607 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-05 00:22:26.164616 | orchestrator | Sunday 05 April 2026 00:22:19 +0000 (0:00:00.491) 0:00:01.558 ********** 2026-04-05 00:22:26.164626 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-05 00:22:26.164637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-05 00:22:26.164646 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-05 00:22:26.164656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-05 00:22:26.164666 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-05 00:22:26.164676 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-05 00:22:26.164685 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-05 00:22:26.164695 | orchestrator | 2026-04-05 00:22:26.164705 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-05 00:22:26.164715 | orchestrator | Sunday 05 April 2026 00:22:25 +0000 (0:00:05.665) 0:00:07.223 ********** 2026-04-05 00:22:26.164724 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:22:26.164734 | orchestrator | 2026-04-05 00:22:26.164744 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-05 00:22:26.164753 | orchestrator | Sunday 05 April 2026 00:22:25 +0000 (0:00:00.128) 0:00:07.352 ********** 2026-04-05 00:22:26.164763 | orchestrator | changed: [testbed-manager] 2026-04-05 00:22:26.164773 | orchestrator | 2026-04-05 00:22:26.164784 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:22:26.164795 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:22:26.164806 | orchestrator | 2026-04-05 00:22:26.164815 | orchestrator | 2026-04-05 00:22:26.164825 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:22:26.164835 | orchestrator | Sunday 05 April 2026 00:22:25 +0000 (0:00:00.588) 0:00:07.940 ********** 2026-04-05 00:22:26.164845 | orchestrator | =============================================================================== 2026-04-05 00:22:26.164869 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.67s 2026-04-05 00:22:26.164880 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.89s 2026-04-05 00:22:26.164891 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2026-04-05 00:22:26.164902 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.49s 2026-04-05 00:22:26.164913 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.13s 2026-04-05 00:22:26.371821 | orchestrator | + osism apply known-hosts 2026-04-05 00:22:37.775100 | orchestrator | 2026-04-05 00:22:37 | INFO  | Prepare task for execution of known-hosts. 2026-04-05 00:22:37.852277 | orchestrator | 2026-04-05 00:22:37 | INFO  | Task 9289294c-2fc3-4409-b09f-f6c06af9acd0 (known-hosts) was prepared for execution. 2026-04-05 00:22:37.852376 | orchestrator | 2026-04-05 00:22:37 | INFO  | It takes a moment until task 9289294c-2fc3-4409-b09f-f6c06af9acd0 (known-hosts) has been started and output is visible here. 2026-04-05 00:22:53.643424 | orchestrator | 2026-04-05 00:22:53.643573 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-05 00:22:53.643604 | orchestrator | 2026-04-05 00:22:53.643623 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-05 00:22:53.643643 | orchestrator | Sunday 05 April 2026 00:22:41 +0000 (0:00:00.214) 0:00:00.214 ********** 2026-04-05 00:22:53.643664 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-05 00:22:53.643685 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-05 00:22:53.643738 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-05 00:22:53.643760 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-05 00:22:53.643779 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-05 00:22:53.643797 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-05 00:22:53.643808 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-05 00:22:53.643819 | orchestrator | 2026-04-05 00:22:53.643830 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-05 
00:22:53.643842 | orchestrator | Sunday 05 April 2026 00:22:47 +0000 (0:00:06.567) 0:00:06.781 ********** 2026-04-05 00:22:53.643867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-05 00:22:53.643880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-05 00:22:53.643892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-05 00:22:53.643902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-05 00:22:53.643913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-05 00:22:53.643924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-05 00:22:53.643935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-05 00:22:53.643946 | orchestrator | 2026-04-05 00:22:53.643957 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:22:53.643968 | orchestrator | Sunday 05 April 2026 00:22:47 +0000 (0:00:00.161) 0:00:06.943 ********** 2026-04-05 00:22:53.643980 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK5zxKHJ3MUZuxxCKaI/0+MKi9UbPucnEcyTKO5xKK6dfUjG/palJL9f1ktI9GmdX19Slg57beklOM69roCmakc=) 2026-04-05 00:22:53.643997 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuMWfTYhYZfzNfEzLKERlk93PgTvB5tbsjT0mrU7INu3Y2/tHGEiqu9tbzHVdCxZ0IXexNPu2t4MGAB/Azcrcvt5gU8DDw0FXwZQdyafIKjiz/Zqp2qQ7iMyHmFM+BUMBjRbSALcmVmN0gGSDn22GbQ36xQV1kwThtaua6rd0FOeIuXOkJ1cocCjsj62YJQcf0945vcMIIUEh7SUIls/CBFYA73rZG2fJf/VeAyTa1PfTqdaYOKsrodIseMDy0grfnw7PDOoOfYKlAKfCGLMlDkmDUgEYG+JFrBAgDZ/l06uiUM9smX8PjFDNO6rlSFvFhk0LNdcsA/YTB/6WvJNiCiIoxhfDCUXB98Y9X5mc+ADtso/XMm3aBxQUbRmDns8DC2JXho8FEWZT4f0D9mqXPZV8c1WMWzUhYlI47Cup7ow1Y4GMnHV7WYfew9do5fWlu/G8iKbJ0yPXXCKfh6zK8GP48Czm8SOEaN/9U3mW+BvdhAPT8Y4NqMNKBq0WEquU=) 2026-04-05 00:22:53.644016 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF1ZcUmNxITpQVEWJ9YqGk4UJUAmB9jQYLWqN3zRG0px) 2026-04-05 00:22:53.644029 | orchestrator | 2026-04-05 00:22:53.644040 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:22:53.644051 | orchestrator | Sunday 05 April 2026 00:22:49 +0000 (0:00:01.329) 0:00:08.272 ********** 2026-04-05 00:22:53.644083 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChbd45RXaJQsGENR6XhXJeH1oaNQwinpUTzpxtYLl+90Mv7IFh334w/9pzM25iY2M9yZlihPYcIMYm0btwDfJSbjJo1oG+PJRsCl9NF5prF5A+DxjHVuHT8NY77WCNDL+10XAHAAVyw3Ln87Y8X8YiuGq8xjhuSGJBwuLKIBzgcLT9JuE02a/clGyIzyE+0O//DkWwdbKXvenVg9uvaZprmO6oDezpwgvlSpkBsUJLd289QRCh/D6t1xHR7anecuPY4kSmK7wISVU+exBwahcd08Dx1SmEQqPC3d6j7968FdQGxlCH8/R528n+Sgf1YATVaZSZ5mo0PCpR6RBtIWQkgQV7nDvWpc7rMN5SPhvsOz5DoED+/4W5Mv3G/2TchP8qHM796Oy4NPqe+FryeKvjxUhrSMW8qha/QzWozXIZtU3mxNVBCRMV3RMfwZbtz3usFChic5x6FMdgezFoCp4KISi0zVMdk/eHFVKNLZrPELGQmspE3dedJ2G+qoBWz8U=) 
2026-04-05 00:22:53.644106 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIu7mKzYYyR1omvw5dFN6w/PUfCLfs0IGuUN2Zd0C5l2) 2026-04-05 00:22:53.644118 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYXEjORAV8jCknk2GNr6n3sRj3cxyGjzhWfzyLZl7qqgIBPi+T3Xo79IwcSRI0w/HV3Q04guY2q5XH8I4F0434=) 2026-04-05 00:22:53.644129 | orchestrator | 2026-04-05 00:22:53.644140 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:22:53.644151 | orchestrator | Sunday 05 April 2026 00:22:50 +0000 (0:00:01.071) 0:00:09.344 ********** 2026-04-05 00:22:53.644163 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrlY2TIPEo7WW9tRc3qyi6q8KbBHASQ/b5Se2rDFp3yh23ZmK10602Jx+vnFd7Q3nGKV2pzFABSItG7LSDDiOeHzEc3fHzpGPAYyXJo+habNTA1J4N3cKCsEZlecMnBFYKdMosHTn4sMAIhcVlBhZ4o2pVmeKqV71VWZ+rkhg+fFFBn270YkPR7whwIYHrH/80fOmAIX64ixTbuxQxpXBdSLHMl0tsMnVVakjVQ04ujeWehTV3kPvA2KdHSGC0q8+CoKlAbA2a63dR3+YyQPazNdoRgPV82Aozx0FOsn4yiPodilhr56bxnAWgtqxSxHcIBygqldNdtASq75SDPBpJmjK4/n+QRFI2z4fCefS8qq6cpGVw0lx8pv/gCaBBuq4Wg4gUOjWHdS9YD8nBQug8NciyKe5iFrHkihialT2zQIe7XPSt0YqWhwbYK1HVwqzha+By8rdzLyrTWZRKxIL1E4Tb7dOUVeWps+IBAwmw93X5mSLwX7IwFJ3oEEcL3yE=) 2026-04-05 00:22:53.644175 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLA5OxH5o6gL3OGxDnSyn9ff7n2PFpuJA3jaJVMeOpy+Hjmoc+36dgQFIFDwPIqboqBos02yFeFtBxyH8pF8kGI=) 2026-04-05 00:22:53.644256 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKjx2DeoyItLY4ciE34T/BLe+Ao5Y9jzzrpPLWl39UYQ) 2026-04-05 00:22:53.644268 | orchestrator | 2026-04-05 00:22:53.644279 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:22:53.644290 
| orchestrator | Sunday 05 April 2026 00:22:51 +0000 (0:00:00.957) 0:00:10.302 ********** 2026-04-05 00:22:53.644301 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrMwF/kR/jHE1C0D7l1GzwsYLEWRpjNtkwUFzu5oT/xMRyUqYw3+cmc34TlyXxGAD78+AlX/tr3swzJKJ7Oktuly3kw69b7JF6zuvOFIVOYNXWq4o621ps4TeX69FTox8s2PjX9LxP2uUnIDzFDUn978OIry1Xu9uvakkYXA8lWmnwC8BTdqjx/J1bGuKu6aKETjDDk4t5Kv70SpCPBcd7Ehr6yC6L5FSNDSOE5fc9D/ExgDmtWgMTOMbtJrDSXIast8XgIocpr5+MAN+S1LC9VyAdi+eQyRy2+QNiUdIZXRKnJc2JbFtlFRGuipF1BL2fd7EdnCUuqYH4xUf6n08EMvrzu5qtDAZYWYKZ+PpcbfxjaHuRSX/R1UH0xoYYs7zOQ8GGjCmqRmy7oMq6n17Mu+0RH88tkR6pn7WHixbby7A9Ag57doFKu6rHA2O4PcUqgwE1Il99alpxAgCYfokB3BE9OzPESJ+E9IGh71HrisbsG3n0Y7XaXrbV/kYHgIs=) 2026-04-05 00:22:53.644313 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDhG1YR42egKASAuDEl8CUNOqR/qAX7eFeMAgay8bXtoEIEbgQsmRA+F01k9xLFEPndU/dS5h8bIb5/fhEsFhxE=) 2026-04-05 00:22:53.644324 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE0bnVigoe+xljDZPjEdG60gucAewtZs29trfP7JHBg) 2026-04-05 00:22:53.644335 | orchestrator | 2026-04-05 00:22:53.644347 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:22:53.644358 | orchestrator | Sunday 05 April 2026 00:22:52 +0000 (0:00:00.971) 0:00:11.273 ********** 2026-04-05 00:22:53.644369 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9x1aQkDtZ7gWvknPGauj9A25MMoecy1A8eEb6rMsfR) 2026-04-05 00:22:53.644380 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLxLhDpE8iU78EMCKZoma0yNwyd64uRvI4UTaXeConSS3eDbzd3N0029OnHwIIVOhl3vbrO62o2jZ9kvs2Gxt6LmU2uTX0UDSVdPljLUv6TZ6/8U3o4mdGKBKFyneJtGBMFN+xsSQW7OPohOCqZQlP72k+LrxTwiUIq0wGk0+L9fm8dc/NONLmV103KTP7bFhrt3oTkg7I1OVJ8anXwLP6/+YlA2bkIjFZqGHNfF0MvwCe7FZttK5e08JHszr/NcDpnAwyyVE0ykihtmh79V4C57jXhgBnTBUvYSke1RCk99Lz1Zro0fD0+KhEPf1wzdiECVQvTS+9LnMq9KGQa/ujFjz3OCftmKq86wwHok6q5Wiw8S2hTCdENVvjU39l03ZekJHdXFuv8XDzptLNslNcOKXV5COqNp6Ww/Z3B2HrWwX8Y/wXbwG3RYHR19YKtoDwiMhJKW3RHrb7Af/yhmq/icA3KzLKcGtT3giP7t4/nP8Whg9U1DySNdsbc1+UURU=) 2026-04-05 00:22:53.644399 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDa7GKC7anbKfo2ccOFG+0EcU+tmTWz+T27mKGnOY9Hv4oNvcYY0LqMtgbrrFTRGTRqUj1gcTc1qUPvnQ05bVS8=) 2026-04-05 00:22:53.644410 | orchestrator | 2026-04-05 00:22:53.644421 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:22:53.644432 | orchestrator | Sunday 05 April 2026 00:22:53 +0000 (0:00:01.051) 0:00:12.324 ********** 2026-04-05 00:22:53.644453 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD244qiNTUH7tAUw0i272nEGxKSEAgxkyXbSi+vgfO6E8tJdKQUkYmXgATWCiypvH62NdR+2g6NKax2cXnkxMikDabUFI4J+U1sSRuJndaw+PfBE/NC5sj6tyTa9Xb+5Lc/NVNBXzE7f3lkfWa7N6U+sjLjoYZ26wc2sEgWeDAB77dwYZuwl+07I7oIPexBircyX2OquBWRRlDHO7jOlHdg0IFzFMVomxoNvOIvX0pQFY/NMZ5BekkxLt2DbU9vhGXDIpe+cwUsnbMLblZtnESlTeF/MZSP7BELNnxs+3kr2MX8uuEXzgrKNt9vGZ/A5XgSN10iF8fovXJ7iufrJ+yQobXNJFAsn5+8NMlPsX+UPjBjFm1ONAc3SOiaCpvVVUuxccsJcDMneH+rYL3SN5bt4LIZ/cQACbL6HNMzC27nuZwJj8w7iLfPeWa8aZUhJ9RL8vaOrmGejrnud26+4xWRpwBLCpxEbZye5Gto3CutNg41G7W4Nr7xrhDSlnJ6uM8=) 2026-04-05 00:23:05.555824 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJR89/sYWe8PzZ8v0gBqHEbQ6yqHfiKBBNFySBGWPHhd) 2026-04-05 00:23:05.555931 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxo4qxALLXH9Cq0wBLK/sHvnDCBwX8civHbwd8j8LMI6XD5kibRQJgNi0eXs1HjL0XwwpKNe0/z27S0EEGwCxs=) 2026-04-05 00:23:05.555949 | orchestrator | 2026-04-05 00:23:05.555961 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:05.555974 | orchestrator | Sunday 05 April 2026 00:22:54 +0000 (0:00:01.202) 0:00:13.527 ********** 2026-04-05 00:23:05.555988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNcryG/wtWbMfZQbg6QP4l7b5D9zvocC2TTNV65jycsQSjPy9/rHwwXWQy8PgaYa2c9eHXksZQietNKxpq/W1N5WcwRFurdaCsZL47ZQnt2pmkbEG5SLPrfwPqnKqAhRlOFsQ4mZccdy+reWHV7DawGdBjBjr3unhfKskNdm56WggIk8xRwwjJxgXfv+8p0Pshxw1q55EEII9YRikUAv2+SOZS9pjZu/F1FFZ7nqIsOyYT8MF4ZqgnbvxomcOCCUlQiVUgzwyW9dNT56fECfJcLRJaLpqjg0WaTtoEZgXg2vPz87YkNtqZSvcIBBVd9OBqdlNY9ljdg3qO/mlU/yZl7t33REKISyQI6OO3RFanQ29oF5e49lQBWg79S3Oa4qYschAX4y/fFj23D9Lh4bv3I0KmVfsrWbfRNAiyf5tYxzMkMiWW7B4nTbjj/RRPravw4wW20CfSLAzypXZIFbzG+KTHIz5jH0j9lc2Oc2OuM75YkzM/33h7d2v5uQpS3yk=) 2026-04-05 00:23:05.556012 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCuvJ3tWxTZD1VAyFgakuuyFlxbQadaRjNjDB7i+Kjjo29zMg90TdsZu2kGucviNwdr8ZGb4/CX16TgPHiSW0ZY=) 2026-04-05 00:23:05.556031 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPECKfgW6dAk6hej3K6KxZe/tFoe8vwLftDTXuQRuMyT) 2026-04-05 00:23:05.556050 | orchestrator | 2026-04-05 00:23:05.556069 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-05 00:23:05.556089 | orchestrator | Sunday 05 April 2026 00:22:55 +0000 (0:00:01.107) 0:00:14.635 ********** 2026-04-05 00:23:05.556108 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-05 00:23:05.556128 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-0) 2026-04-05 00:23:05.556145 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-05 00:23:05.556166 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-05 00:23:05.556187 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-05 00:23:05.556226 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-05 00:23:05.556261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-05 00:23:05.556273 | orchestrator | 2026-04-05 00:23:05.556284 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-05 00:23:05.556296 | orchestrator | Sunday 05 April 2026 00:23:00 +0000 (0:00:05.447) 0:00:20.083 ********** 2026-04-05 00:23:05.556309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-05 00:23:05.556322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-05 00:23:05.556333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-05 00:23:05.556345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-05 00:23:05.556359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-05 00:23:05.556372 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-05 00:23:05.556385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-05 00:23:05.556398 | orchestrator | 2026-04-05 00:23:05.556411 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:05.556424 | orchestrator | Sunday 05 April 2026 00:23:01 +0000 (0:00:00.181) 0:00:20.265 ********** 2026-04-05 00:23:05.556437 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF1ZcUmNxITpQVEWJ9YqGk4UJUAmB9jQYLWqN3zRG0px) 2026-04-05 00:23:05.556475 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuMWfTYhYZfzNfEzLKERlk93PgTvB5tbsjT0mrU7INu3Y2/tHGEiqu9tbzHVdCxZ0IXexNPu2t4MGAB/Azcrcvt5gU8DDw0FXwZQdyafIKjiz/Zqp2qQ7iMyHmFM+BUMBjRbSALcmVmN0gGSDn22GbQ36xQV1kwThtaua6rd0FOeIuXOkJ1cocCjsj62YJQcf0945vcMIIUEh7SUIls/CBFYA73rZG2fJf/VeAyTa1PfTqdaYOKsrodIseMDy0grfnw7PDOoOfYKlAKfCGLMlDkmDUgEYG+JFrBAgDZ/l06uiUM9smX8PjFDNO6rlSFvFhk0LNdcsA/YTB/6WvJNiCiIoxhfDCUXB98Y9X5mc+ADtso/XMm3aBxQUbRmDns8DC2JXho8FEWZT4f0D9mqXPZV8c1WMWzUhYlI47Cup7ow1Y4GMnHV7WYfew9do5fWlu/G8iKbJ0yPXXCKfh6zK8GP48Czm8SOEaN/9U3mW+BvdhAPT8Y4NqMNKBq0WEquU=) 2026-04-05 00:23:05.556491 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK5zxKHJ3MUZuxxCKaI/0+MKi9UbPucnEcyTKO5xKK6dfUjG/palJL9f1ktI9GmdX19Slg57beklOM69roCmakc=) 2026-04-05 00:23:05.556542 | orchestrator | 2026-04-05 00:23:05.556557 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:05.556570 | orchestrator | Sunday 05 April 2026 
00:23:02 +0000 (0:00:01.098) 0:00:21.363 ********** 2026-04-05 00:23:05.556583 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIu7mKzYYyR1omvw5dFN6w/PUfCLfs0IGuUN2Zd0C5l2) 2026-04-05 00:23:05.556597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChbd45RXaJQsGENR6XhXJeH1oaNQwinpUTzpxtYLl+90Mv7IFh334w/9pzM25iY2M9yZlihPYcIMYm0btwDfJSbjJo1oG+PJRsCl9NF5prF5A+DxjHVuHT8NY77WCNDL+10XAHAAVyw3Ln87Y8X8YiuGq8xjhuSGJBwuLKIBzgcLT9JuE02a/clGyIzyE+0O//DkWwdbKXvenVg9uvaZprmO6oDezpwgvlSpkBsUJLd289QRCh/D6t1xHR7anecuPY4kSmK7wISVU+exBwahcd08Dx1SmEQqPC3d6j7968FdQGxlCH8/R528n+Sgf1YATVaZSZ5mo0PCpR6RBtIWQkgQV7nDvWpc7rMN5SPhvsOz5DoED+/4W5Mv3G/2TchP8qHM796Oy4NPqe+FryeKvjxUhrSMW8qha/QzWozXIZtU3mxNVBCRMV3RMfwZbtz3usFChic5x6FMdgezFoCp4KISi0zVMdk/eHFVKNLZrPELGQmspE3dedJ2G+qoBWz8U=) 2026-04-05 00:23:05.556619 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYXEjORAV8jCknk2GNr6n3sRj3cxyGjzhWfzyLZl7qqgIBPi+T3Xo79IwcSRI0w/HV3Q04guY2q5XH8I4F0434=) 2026-04-05 00:23:05.556631 | orchestrator | 2026-04-05 00:23:05.556644 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:05.556657 | orchestrator | Sunday 05 April 2026 00:23:03 +0000 (0:00:01.115) 0:00:22.479 ********** 2026-04-05 00:23:05.556671 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrlY2TIPEo7WW9tRc3qyi6q8KbBHASQ/b5Se2rDFp3yh23ZmK10602Jx+vnFd7Q3nGKV2pzFABSItG7LSDDiOeHzEc3fHzpGPAYyXJo+habNTA1J4N3cKCsEZlecMnBFYKdMosHTn4sMAIhcVlBhZ4o2pVmeKqV71VWZ+rkhg+fFFBn270YkPR7whwIYHrH/80fOmAIX64ixTbuxQxpXBdSLHMl0tsMnVVakjVQ04ujeWehTV3kPvA2KdHSGC0q8+CoKlAbA2a63dR3+YyQPazNdoRgPV82Aozx0FOsn4yiPodilhr56bxnAWgtqxSxHcIBygqldNdtASq75SDPBpJmjK4/n+QRFI2z4fCefS8qq6cpGVw0lx8pv/gCaBBuq4Wg4gUOjWHdS9YD8nBQug8NciyKe5iFrHkihialT2zQIe7XPSt0YqWhwbYK1HVwqzha+By8rdzLyrTWZRKxIL1E4Tb7dOUVeWps+IBAwmw93X5mSLwX7IwFJ3oEEcL3yE=) 2026-04-05 00:23:05.556685 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLA5OxH5o6gL3OGxDnSyn9ff7n2PFpuJA3jaJVMeOpy+Hjmoc+36dgQFIFDwPIqboqBos02yFeFtBxyH8pF8kGI=) 2026-04-05 00:23:05.556699 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKjx2DeoyItLY4ciE34T/BLe+Ao5Y9jzzrpPLWl39UYQ) 2026-04-05 00:23:05.556711 | orchestrator | 2026-04-05 00:23:05.556724 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:05.556736 | orchestrator | Sunday 05 April 2026 00:23:04 +0000 (0:00:01.121) 0:00:23.601 ********** 2026-04-05 00:23:05.556747 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDhG1YR42egKASAuDEl8CUNOqR/qAX7eFeMAgay8bXtoEIEbgQsmRA+F01k9xLFEPndU/dS5h8bIb5/fhEsFhxE=) 2026-04-05 00:23:05.556765 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrMwF/kR/jHE1C0D7l1GzwsYLEWRpjNtkwUFzu5oT/xMRyUqYw3+cmc34TlyXxGAD78+AlX/tr3swzJKJ7Oktuly3kw69b7JF6zuvOFIVOYNXWq4o621ps4TeX69FTox8s2PjX9LxP2uUnIDzFDUn978OIry1Xu9uvakkYXA8lWmnwC8BTdqjx/J1bGuKu6aKETjDDk4t5Kv70SpCPBcd7Ehr6yC6L5FSNDSOE5fc9D/ExgDmtWgMTOMbtJrDSXIast8XgIocpr5+MAN+S1LC9VyAdi+eQyRy2+QNiUdIZXRKnJc2JbFtlFRGuipF1BL2fd7EdnCUuqYH4xUf6n08EMvrzu5qtDAZYWYKZ+PpcbfxjaHuRSX/R1UH0xoYYs7zOQ8GGjCmqRmy7oMq6n17Mu+0RH88tkR6pn7WHixbby7A9Ag57doFKu6rHA2O4PcUqgwE1Il99alpxAgCYfokB3BE9OzPESJ+E9IGh71HrisbsG3n0Y7XaXrbV/kYHgIs=) 2026-04-05 00:23:05.556789 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFE0bnVigoe+xljDZPjEdG60gucAewtZs29trfP7JHBg) 2026-04-05 00:23:09.985658 | orchestrator | 2026-04-05 00:23:09.985756 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:09.985768 | orchestrator | Sunday 05 April 2026 00:23:05 +0000 (0:00:01.078) 0:00:24.679 ********** 2026-04-05 00:23:09.985776 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9x1aQkDtZ7gWvknPGauj9A25MMoecy1A8eEb6rMsfR) 2026-04-05 00:23:09.985801 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLxLhDpE8iU78EMCKZoma0yNwyd64uRvI4UTaXeConSS3eDbzd3N0029OnHwIIVOhl3vbrO62o2jZ9kvs2Gxt6LmU2uTX0UDSVdPljLUv6TZ6/8U3o4mdGKBKFyneJtGBMFN+xsSQW7OPohOCqZQlP72k+LrxTwiUIq0wGk0+L9fm8dc/NONLmV103KTP7bFhrt3oTkg7I1OVJ8anXwLP6/+YlA2bkIjFZqGHNfF0MvwCe7FZttK5e08JHszr/NcDpnAwyyVE0ykihtmh79V4C57jXhgBnTBUvYSke1RCk99Lz1Zro0fD0+KhEPf1wzdiECVQvTS+9LnMq9KGQa/ujFjz3OCftmKq86wwHok6q5Wiw8S2hTCdENVvjU39l03ZekJHdXFuv8XDzptLNslNcOKXV5COqNp6Ww/Z3B2HrWwX8Y/wXbwG3RYHR19YKtoDwiMhJKW3RHrb7Af/yhmq/icA3KzLKcGtT3giP7t4/nP8Whg9U1DySNdsbc1+UURU=) 2026-04-05 00:23:09.985831 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDa7GKC7anbKfo2ccOFG+0EcU+tmTWz+T27mKGnOY9Hv4oNvcYY0LqMtgbrrFTRGTRqUj1gcTc1qUPvnQ05bVS8=) 2026-04-05 00:23:09.985841 | orchestrator | 2026-04-05 00:23:09.985847 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:09.985854 | orchestrator | Sunday 05 April 2026 00:23:06 +0000 (0:00:01.077) 0:00:25.757 ********** 2026-04-05 00:23:09.985861 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD244qiNTUH7tAUw0i272nEGxKSEAgxkyXbSi+vgfO6E8tJdKQUkYmXgATWCiypvH62NdR+2g6NKax2cXnkxMikDabUFI4J+U1sSRuJndaw+PfBE/NC5sj6tyTa9Xb+5Lc/NVNBXzE7f3lkfWa7N6U+sjLjoYZ26wc2sEgWeDAB77dwYZuwl+07I7oIPexBircyX2OquBWRRlDHO7jOlHdg0IFzFMVomxoNvOIvX0pQFY/NMZ5BekkxLt2DbU9vhGXDIpe+cwUsnbMLblZtnESlTeF/MZSP7BELNnxs+3kr2MX8uuEXzgrKNt9vGZ/A5XgSN10iF8fovXJ7iufrJ+yQobXNJFAsn5+8NMlPsX+UPjBjFm1ONAc3SOiaCpvVVUuxccsJcDMneH+rYL3SN5bt4LIZ/cQACbL6HNMzC27nuZwJj8w7iLfPeWa8aZUhJ9RL8vaOrmGejrnud26+4xWRpwBLCpxEbZye5Gto3CutNg41G7W4Nr7xrhDSlnJ6uM8=) 2026-04-05 00:23:09.985868 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKxo4qxALLXH9Cq0wBLK/sHvnDCBwX8civHbwd8j8LMI6XD5kibRQJgNi0eXs1HjL0XwwpKNe0/z27S0EEGwCxs=) 2026-04-05 00:23:09.985875 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJR89/sYWe8PzZ8v0gBqHEbQ6yqHfiKBBNFySBGWPHhd) 2026-04-05 00:23:09.985881 | orchestrator | 2026-04-05 00:23:09.985888 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 00:23:09.985894 | orchestrator | Sunday 05 April 2026 00:23:07 +0000 (0:00:01.126) 0:00:26.884 ********** 2026-04-05 00:23:09.985900 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDNcryG/wtWbMfZQbg6QP4l7b5D9zvocC2TTNV65jycsQSjPy9/rHwwXWQy8PgaYa2c9eHXksZQietNKxpq/W1N5WcwRFurdaCsZL47ZQnt2pmkbEG5SLPrfwPqnKqAhRlOFsQ4mZccdy+reWHV7DawGdBjBjr3unhfKskNdm56WggIk8xRwwjJxgXfv+8p0Pshxw1q55EEII9YRikUAv2+SOZS9pjZu/F1FFZ7nqIsOyYT8MF4ZqgnbvxomcOCCUlQiVUgzwyW9dNT56fECfJcLRJaLpqjg0WaTtoEZgXg2vPz87YkNtqZSvcIBBVd9OBqdlNY9ljdg3qO/mlU/yZl7t33REKISyQI6OO3RFanQ29oF5e49lQBWg79S3Oa4qYschAX4y/fFj23D9Lh4bv3I0KmVfsrWbfRNAiyf5tYxzMkMiWW7B4nTbjj/RRPravw4wW20CfSLAzypXZIFbzG+KTHIz5jH0j9lc2Oc2OuM75YkzM/33h7d2v5uQpS3yk=) 2026-04-05 00:23:09.985907 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCuvJ3tWxTZD1VAyFgakuuyFlxbQadaRjNjDB7i+Kjjo29zMg90TdsZu2kGucviNwdr8ZGb4/CX16TgPHiSW0ZY=) 2026-04-05 00:23:09.985914 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPECKfgW6dAk6hej3K6KxZe/tFoe8vwLftDTXuQRuMyT) 2026-04-05 00:23:09.985921 | orchestrator | 2026-04-05 00:23:09.985927 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-05 00:23:09.985933 | orchestrator | Sunday 05 April 2026 00:23:08 +0000 (0:00:01.140) 0:00:28.024 ********** 2026-04-05 00:23:09.985941 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-05 00:23:09.985949 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-05 00:23:09.985956 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-05 00:23:09.985962 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-05 00:23:09.985969 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-05 00:23:09.985975 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-05 00:23:09.985982 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-05 00:23:09.985989 | orchestrator | 
skipping: [testbed-manager] 2026-04-05 00:23:09.985996 | orchestrator | 2026-04-05 00:23:09.986077 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-05 00:23:09.986087 | orchestrator | Sunday 05 April 2026 00:23:09 +0000 (0:00:00.214) 0:00:28.239 ********** 2026-04-05 00:23:09.986102 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:23:09.986108 | orchestrator | 2026-04-05 00:23:09.986114 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-05 00:23:09.986121 | orchestrator | Sunday 05 April 2026 00:23:09 +0000 (0:00:00.057) 0:00:28.297 ********** 2026-04-05 00:23:09.986127 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:23:09.986133 | orchestrator | 2026-04-05 00:23:09.986139 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-05 00:23:09.986146 | orchestrator | Sunday 05 April 2026 00:23:09 +0000 (0:00:00.067) 0:00:28.364 ********** 2026-04-05 00:23:09.986152 | orchestrator | changed: [testbed-manager] 2026-04-05 00:23:09.986159 | orchestrator | 2026-04-05 00:23:09.986165 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:23:09.986173 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:23:09.986180 | orchestrator | 2026-04-05 00:23:09.986186 | orchestrator | 2026-04-05 00:23:09.986193 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:23:09.986199 | orchestrator | Sunday 05 April 2026 00:23:09 +0000 (0:00:00.506) 0:00:28.871 ********** 2026-04-05 00:23:09.986205 | orchestrator | =============================================================================== 2026-04-05 00:23:09.986211 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.57s 2026-04-05 
00:23:09.986218 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.45s 2026-04-05 00:23:09.986226 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2026-04-05 00:23:09.986233 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-05 00:23:09.986239 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-04-05 00:23:09.986245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-04-05 00:23:09.986252 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-05 00:23:09.986259 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-05 00:23:09.986266 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-04-05 00:23:09.986272 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-05 00:23:09.986280 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-04-05 00:23:09.986286 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-04-05 00:23:09.986293 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-04-05 00:23:09.986299 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-05 00:23:09.986315 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-04-05 00:23:09.986322 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-04-05 00:23:09.986329 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2026-04-05 
00:23:09.986335 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.21s 2026-04-05 00:23:09.986342 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-04-05 00:23:09.986350 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-04-05 00:23:10.189582 | orchestrator | + osism apply squid 2026-04-05 00:23:21.602702 | orchestrator | 2026-04-05 00:23:21 | INFO  | Prepare task for execution of squid. 2026-04-05 00:23:21.687236 | orchestrator | 2026-04-05 00:23:21 | INFO  | Task 16a12d98-341e-4378-8046-47da8cd07451 (squid) was prepared for execution. 2026-04-05 00:23:21.687363 | orchestrator | 2026-04-05 00:23:21 | INFO  | It takes a moment until task 16a12d98-341e-4378-8046-47da8cd07451 (squid) has been started and output is visible here. 2026-04-05 00:25:23.600809 | orchestrator | 2026-04-05 00:25:23.600959 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-05 00:25:23.600978 | orchestrator | 2026-04-05 00:25:23.600991 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-05 00:25:23.601003 | orchestrator | Sunday 05 April 2026 00:23:25 +0000 (0:00:00.217) 0:00:00.217 ********** 2026-04-05 00:25:23.601015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:25:23.601027 | orchestrator | 2026-04-05 00:25:23.601039 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-05 00:25:23.601050 | orchestrator | Sunday 05 April 2026 00:23:25 +0000 (0:00:00.094) 0:00:00.312 ********** 2026-04-05 00:25:23.601061 | orchestrator | ok: [testbed-manager] 2026-04-05 00:25:23.601074 | orchestrator | 2026-04-05 00:25:23.601085 | 
orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-05 00:25:23.601096 | orchestrator | Sunday 05 April 2026 00:23:27 +0000 (0:00:02.515) 0:00:02.827 ********** 2026-04-05 00:25:23.601108 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-05 00:25:23.601119 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-05 00:25:23.601130 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-05 00:25:23.601141 | orchestrator | 2026-04-05 00:25:23.601152 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-05 00:25:23.601169 | orchestrator | Sunday 05 April 2026 00:23:28 +0000 (0:00:01.315) 0:00:04.143 ********** 2026-04-05 00:25:23.601190 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-05 00:25:23.601210 | orchestrator | 2026-04-05 00:25:23.601228 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-05 00:25:23.601246 | orchestrator | Sunday 05 April 2026 00:23:30 +0000 (0:00:01.072) 0:00:05.215 ********** 2026-04-05 00:25:23.601264 | orchestrator | ok: [testbed-manager] 2026-04-05 00:25:23.601283 | orchestrator | 2026-04-05 00:25:23.601331 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-05 00:25:23.601352 | orchestrator | Sunday 05 April 2026 00:23:30 +0000 (0:00:00.349) 0:00:05.565 ********** 2026-04-05 00:25:23.601363 | orchestrator | changed: [testbed-manager] 2026-04-05 00:25:23.601375 | orchestrator | 2026-04-05 00:25:23.601386 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-05 00:25:23.601396 | orchestrator | Sunday 05 April 2026 00:23:31 +0000 (0:00:00.862) 0:00:06.427 ********** 2026-04-05 00:25:23.601407 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 
retries left). 2026-04-05 00:25:23.601420 | orchestrator | ok: [testbed-manager] 2026-04-05 00:25:23.601431 | orchestrator | 2026-04-05 00:25:23.601442 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-05 00:25:23.601482 | orchestrator | Sunday 05 April 2026 00:24:10 +0000 (0:00:39.255) 0:00:45.683 ********** 2026-04-05 00:25:23.601493 | orchestrator | changed: [testbed-manager] 2026-04-05 00:25:23.601504 | orchestrator | 2026-04-05 00:25:23.601515 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-05 00:25:23.601526 | orchestrator | Sunday 05 April 2026 00:24:22 +0000 (0:00:12.048) 0:00:57.732 ********** 2026-04-05 00:25:23.601537 | orchestrator | Pausing for 60 seconds 2026-04-05 00:25:23.601549 | orchestrator | changed: [testbed-manager] 2026-04-05 00:25:23.601560 | orchestrator | 2026-04-05 00:25:23.601571 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-05 00:25:23.601582 | orchestrator | Sunday 05 April 2026 00:25:22 +0000 (0:01:00.109) 0:01:57.841 ********** 2026-04-05 00:25:23.601593 | orchestrator | ok: [testbed-manager] 2026-04-05 00:25:23.601604 | orchestrator | 2026-04-05 00:25:23.601615 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-05 00:25:23.601654 | orchestrator | Sunday 05 April 2026 00:25:22 +0000 (0:00:00.075) 0:01:57.917 ********** 2026-04-05 00:25:23.601666 | orchestrator | changed: [testbed-manager] 2026-04-05 00:25:23.601677 | orchestrator | 2026-04-05 00:25:23.601688 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:25:23.601699 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:25:23.601710 | orchestrator | 2026-04-05 00:25:23.601720 | orchestrator | 2026-04-05 00:25:23.601731 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:25:23.601742 | orchestrator | Sunday 05 April 2026 00:25:23 +0000 (0:00:00.645) 0:01:58.562 ********** 2026-04-05 00:25:23.601753 | orchestrator | =============================================================================== 2026-04-05 00:25:23.601764 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s 2026-04-05 00:25:23.601775 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 39.26s 2026-04-05 00:25:23.601785 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.05s 2026-04-05 00:25:23.601796 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.52s 2026-04-05 00:25:23.601807 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.32s 2026-04-05 00:25:23.601818 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2026-04-05 00:25:23.601828 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.86s 2026-04-05 00:25:23.601839 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2026-04-05 00:25:23.601850 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-04-05 00:25:23.601861 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-04-05 00:25:23.601871 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-04-05 00:25:23.800729 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 00:25:23.800838 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-05 00:25:23.804977 | orchestrator | + set -e 2026-04-05 00:25:23.805036 | orchestrator | + NAMESPACE=kolla 
2026-04-05 00:25:23.805047 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-05 00:25:23.810622 | orchestrator | ++ semver latest 9.0.0 2026-04-05 00:25:23.864763 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-05 00:25:23.864881 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 00:25:23.866293 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-05 00:25:35.212775 | orchestrator | 2026-04-05 00:25:35 | INFO  | Prepare task for execution of operator. 2026-04-05 00:25:35.294415 | orchestrator | 2026-04-05 00:25:35 | INFO  | Task 8a0b203b-de32-46be-9a0a-7a4a720c2873 (operator) was prepared for execution. 2026-04-05 00:25:35.294595 | orchestrator | 2026-04-05 00:25:35 | INFO  | It takes a moment until task 8a0b203b-de32-46be-9a0a-7a4a720c2873 (operator) has been started and output is visible here. 2026-04-05 00:25:51.233734 | orchestrator | 2026-04-05 00:25:51.233880 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-05 00:25:51.233909 | orchestrator | 2026-04-05 00:25:51.233930 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 00:25:51.233950 | orchestrator | Sunday 05 April 2026 00:25:38 +0000 (0:00:00.199) 0:00:00.199 ********** 2026-04-05 00:25:51.233971 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:25:51.233991 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:25:51.234008 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:25:51.234088 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:25:51.234100 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:25:51.234115 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:25:51.234127 | orchestrator | 2026-04-05 00:25:51.234138 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-05 00:25:51.234177 | orchestrator | Sunday 05 April 2026 
00:25:42 +0000 (0:00:04.242) 0:00:04.442 ********** 2026-04-05 00:25:51.234223 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:25:51.234235 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:25:51.234248 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:25:51.234262 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:25:51.234275 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:25:51.234287 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:25:51.234299 | orchestrator | 2026-04-05 00:25:51.234311 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-05 00:25:51.234323 | orchestrator | 2026-04-05 00:25:51.234340 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-05 00:25:51.234360 | orchestrator | Sunday 05 April 2026 00:25:43 +0000 (0:00:00.863) 0:00:05.305 ********** 2026-04-05 00:25:51.234381 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:25:51.234401 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:25:51.234416 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:25:51.234452 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:25:51.234465 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:25:51.234477 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:25:51.234490 | orchestrator | 2026-04-05 00:25:51.234503 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-05 00:25:51.234517 | orchestrator | Sunday 05 April 2026 00:25:43 +0000 (0:00:00.172) 0:00:05.478 ********** 2026-04-05 00:25:51.234530 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:25:51.234542 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:25:51.234555 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:25:51.234567 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:25:51.234580 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:25:51.234593 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:25:51.234605 | 
orchestrator | 2026-04-05 00:25:51.234635 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-05 00:25:51.234647 | orchestrator | Sunday 05 April 2026 00:25:44 +0000 (0:00:00.168) 0:00:05.646 ********** 2026-04-05 00:25:51.234658 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:25:51.234670 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:25:51.234681 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:25:51.234692 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:25:51.234703 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:25:51.234714 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:25:51.234724 | orchestrator | 2026-04-05 00:25:51.234736 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-05 00:25:51.234747 | orchestrator | Sunday 05 April 2026 00:25:44 +0000 (0:00:00.705) 0:00:06.351 ********** 2026-04-05 00:25:51.234758 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:25:51.234768 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:25:51.234779 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:25:51.234790 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:25:51.234801 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:25:51.234812 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:25:51.234823 | orchestrator | 2026-04-05 00:25:51.234834 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-05 00:25:51.234845 | orchestrator | Sunday 05 April 2026 00:25:45 +0000 (0:00:00.861) 0:00:07.213 ********** 2026-04-05 00:25:51.234856 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-05 00:25:51.234867 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-05 00:25:51.234878 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-05 00:25:51.234889 | orchestrator | changed: [testbed-node-3] => 
(item=adm)
2026-04-05 00:25:51.234900 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-05 00:25:51.234911 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-05 00:25:51.234921 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-05 00:25:51.234932 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-05 00:25:51.234954 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-05 00:25:51.234965 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-05 00:25:51.234975 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-05 00:25:51.234986 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-05 00:25:51.234997 | orchestrator |
2026-04-05 00:25:51.235008 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-05 00:25:51.235019 | orchestrator | Sunday 05 April 2026 00:25:46 +0000 (0:00:01.124) 0:00:08.337 **********
2026-04-05 00:25:51.235030 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:25:51.235041 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:25:51.235052 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:25:51.235063 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:25:51.235074 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:25:51.235084 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:25:51.235095 | orchestrator |
2026-04-05 00:25:51.235106 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-05 00:25:51.235118 | orchestrator | Sunday 05 April 2026 00:25:47 +0000 (0:00:01.260) 0:00:09.597 **********
2026-04-05 00:25:51.235129 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:25:51.235140 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:25:51.235151 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:25:51.235162 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:25:51.235174 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:25:51.235206 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-05 00:25:51.235218 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-05 00:25:51.235229 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-05 00:25:51.235240 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-05 00:25:51.235251 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-05 00:25:51.235262 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-05 00:25:51.235273 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-05 00:25:51.235284 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:25:51.235294 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-05 00:25:51.235305 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-05 00:25:51.235322 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-05 00:25:51.235333 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:25:51.235344 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:25:51.235355 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:25:51.235366 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:25:51.235376 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-05 00:25:51.235387 | orchestrator |
2026-04-05 00:25:51.235398 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-05 00:25:51.235410 | orchestrator | Sunday 05 April 2026 00:25:49 +0000 (0:00:01.198) 0:00:10.796 **********
2026-04-05 00:25:51.235421 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:25:51.235449 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:25:51.235461 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:25:51.235472 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:25:51.235483 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:25:51.235493 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:25:51.235504 | orchestrator |
2026-04-05 00:25:51.235515 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-05 00:25:51.235575 | orchestrator | Sunday 05 April 2026 00:25:49 +0000 (0:00:00.158) 0:00:10.955 **********
2026-04-05 00:25:51.235586 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:25:51.235597 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:25:51.235608 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:25:51.235619 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:25:51.235629 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:25:51.235640 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:25:51.235651 | orchestrator |
2026-04-05 00:25:51.235662 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-05 00:25:51.235673 | orchestrator | Sunday 05 April 2026 00:25:49 +0000 (0:00:00.171) 0:00:11.126 **********
2026-04-05 00:25:51.235684 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:25:51.235695 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:25:51.235706 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:25:51.235717 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:25:51.235728 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:25:51.235738 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:25:51.235749 | orchestrator |
2026-04-05 00:25:51.235760 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-05 00:25:51.235771 | orchestrator | Sunday 05 April 2026 00:25:50 +0000 (0:00:00.540) 0:00:11.667 **********
2026-04-05 00:25:51.235782 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:25:51.235793 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:25:51.235804 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:25:51.235814 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:25:51.235825 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:25:51.235836 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:25:51.235847 | orchestrator |
2026-04-05 00:25:51.235857 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-05 00:25:51.235869 | orchestrator | Sunday 05 April 2026 00:25:50 +0000 (0:00:00.206) 0:00:11.873 **********
2026-04-05 00:25:51.235880 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-05 00:25:51.235891 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 00:25:51.235901 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:25:51.235912 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:25:51.235923 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 00:25:51.235934 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-05 00:25:51.235945 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:25:51.235955 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:25:51.235966 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 00:25:51.235977 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 00:25:51.235988 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:25:51.235999 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:25:51.236009 | orchestrator |
2026-04-05 00:25:51.236020 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-05 00:25:51.236032 | orchestrator | Sunday 05 April 2026 00:25:50 +0000 (0:00:00.674) 0:00:12.548 **********
2026-04-05 00:25:51.236043 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:25:51.236054 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:25:51.236064 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:25:51.236075 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:25:51.236086 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:25:51.236097 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:25:51.236108 | orchestrator |
2026-04-05 00:25:51.236119 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-05 00:25:51.236130 | orchestrator | Sunday 05 April 2026 00:25:51 +0000 (0:00:00.153) 0:00:12.702 **********
2026-04-05 00:25:51.236140 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:25:51.236152 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:25:51.236163 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:25:51.236180 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:25:51.236198 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:25:52.481065 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:25:52.481201 | orchestrator |
2026-04-05 00:25:52.481222 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-05 00:25:52.481243 | orchestrator | Sunday 05 April 2026 00:25:51 +0000 (0:00:00.169) 0:00:12.871 **********
2026-04-05 00:25:52.481261 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:25:52.481279 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:25:52.481311 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:25:52.481343 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:25:52.481361 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:25:52.481379 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:25:52.481397 | orchestrator |
2026-04-05 00:25:52.481415 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-05 00:25:52.481496 | orchestrator | Sunday 05 April 2026 00:25:51 +0000 (0:00:00.149) 0:00:13.021 **********
2026-04-05 00:25:52.481510 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:25:52.481521 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:25:52.481532 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:25:52.481543 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:25:52.481554 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:25:52.481565 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:25:52.481575 | orchestrator |
2026-04-05 00:25:52.481587 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-05 00:25:52.481599 | orchestrator | Sunday 05 April 2026 00:25:52 +0000 (0:00:00.652) 0:00:13.674 **********
2026-04-05 00:25:52.481613 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:25:52.481625 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:25:52.481638 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:25:52.481650 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:25:52.481663 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:25:52.481676 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:25:52.481688 | orchestrator |
2026-04-05 00:25:52.481700 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:25:52.481715 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:25:52.481729 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:25:52.481762 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:25:52.481775 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:25:52.481788 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:25:52.481801 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 00:25:52.481814 | orchestrator |
2026-04-05 00:25:52.481826 | orchestrator |
2026-04-05 00:25:52.481837 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:25:52.481848 | orchestrator | Sunday 05 April 2026 00:25:52 +0000 (0:00:00.227) 0:00:13.901 **********
2026-04-05 00:25:52.481859 | orchestrator | ===============================================================================
2026-04-05 00:25:52.481870 | orchestrator | Gathering Facts --------------------------------------------------------- 4.24s
2026-04-05 00:25:52.481881 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2026-04-05 00:25:52.481892 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s
2026-04-05 00:25:52.481925 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s
2026-04-05 00:25:52.481936 | orchestrator | Do not require tty for all users ---------------------------------------- 0.86s
2026-04-05 00:25:52.481947 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s
2026-04-05 00:25:52.481958 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.71s
2026-04-05 00:25:52.481968 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s
2026-04-05 00:25:52.481979 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2026-04-05 00:25:52.481990 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s
2026-04-05 00:25:52.482001 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-04-05 00:25:52.482072 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s
2026-04-05 00:25:52.482088 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-04-05 00:25:52.482100 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-04-05 00:25:52.482111 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-04-05 00:25:52.482121 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-04-05 00:25:52.482132 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-04-05 00:25:52.482143 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-05 00:25:52.482154 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2026-04-05 00:25:52.669692 | orchestrator | + osism apply --environment custom facts
2026-04-05 00:25:54.062302 | orchestrator | 2026-04-05 00:25:54 | INFO  | Trying to run play facts in environment custom
2026-04-05 00:26:04.233643 | orchestrator | 2026-04-05 00:26:04 | INFO  | Prepare task for execution of facts.
2026-04-05 00:26:04.325081 | orchestrator | 2026-04-05 00:26:04 | INFO  | Task c7f823df-3594-42ba-a04d-9f5d5a338506 (facts) was prepared for execution.
2026-04-05 00:26:04.325182 | orchestrator | 2026-04-05 00:26:04 | INFO  | It takes a moment until task c7f823df-3594-42ba-a04d-9f5d5a338506 (facts) has been started and output is visible here.
2026-04-05 00:26:45.296386 | orchestrator |
2026-04-05 00:26:45.296523 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-05 00:26:45.296539 | orchestrator |
2026-04-05 00:26:45.296551 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 00:26:45.296579 | orchestrator | Sunday 05 April 2026 00:26:07 +0000 (0:00:00.120) 0:00:00.120 **********
2026-04-05 00:26:45.296591 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:26:45.296604 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:26:45.296615 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:26:45.296627 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:26:45.296638 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:26:45.296649 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:26:45.296660 | orchestrator | ok: [testbed-manager]
2026-04-05 00:26:45.296671 | orchestrator |
2026-04-05 00:26:45.296682 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-05 00:26:45.296693 | orchestrator | Sunday 05 April 2026 00:26:08 +0000 (0:00:01.421) 0:00:01.541 **********
2026-04-05 00:26:45.296704 | orchestrator | ok: [testbed-manager]
2026-04-05 00:26:45.296715 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:26:45.296726 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:26:45.296738 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:26:45.296749 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:26:45.296760 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:26:45.296770 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:26:45.296802 | orchestrator |
2026-04-05 00:26:45.296813 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-05 00:26:45.296824 | orchestrator |
2026-04-05 00:26:45.296835 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-05 00:26:45.296846 | orchestrator | Sunday 05 April 2026 00:26:10 +0000 (0:00:01.238) 0:00:02.780 **********
2026-04-05 00:26:45.296856 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.296867 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.296878 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.296889 | orchestrator |
2026-04-05 00:26:45.296899 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-05 00:26:45.296911 | orchestrator | Sunday 05 April 2026 00:26:10 +0000 (0:00:00.104) 0:00:02.884 **********
2026-04-05 00:26:45.296924 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.296936 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.296948 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.296960 | orchestrator |
2026-04-05 00:26:45.296973 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-05 00:26:45.296986 | orchestrator | Sunday 05 April 2026 00:26:10 +0000 (0:00:00.194) 0:00:03.078 **********
2026-04-05 00:26:45.296998 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.297010 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.297022 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.297035 | orchestrator |
2026-04-05 00:26:45.297047 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-05 00:26:45.297059 | orchestrator | Sunday 05 April 2026 00:26:10 +0000 (0:00:00.205) 0:00:03.283 **********
2026-04-05 00:26:45.297074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:26:45.297089 | orchestrator |
2026-04-05 00:26:45.297102 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-05 00:26:45.297115 | orchestrator | Sunday 05 April 2026 00:26:10 +0000 (0:00:00.156) 0:00:03.440 **********
2026-04-05 00:26:45.297127 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.297139 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.297152 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.297164 | orchestrator |
2026-04-05 00:26:45.297175 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-05 00:26:45.297186 | orchestrator | Sunday 05 April 2026 00:26:11 +0000 (0:00:00.407) 0:00:03.847 **********
2026-04-05 00:26:45.297196 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:26:45.297207 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:26:45.297218 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:26:45.297229 | orchestrator |
2026-04-05 00:26:45.297240 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-05 00:26:45.297250 | orchestrator | Sunday 05 April 2026 00:26:11 +0000 (0:00:00.142) 0:00:03.989 **********
2026-04-05 00:26:45.297261 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:26:45.297272 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:26:45.297283 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:26:45.297293 | orchestrator |
2026-04-05 00:26:45.297304 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-05 00:26:45.297315 | orchestrator | Sunday 05 April 2026 00:26:12 +0000 (0:00:01.032) 0:00:05.022 **********
2026-04-05 00:26:45.297326 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.297337 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.297347 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.297359 | orchestrator |
2026-04-05 00:26:45.297370 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-05 00:26:45.297380 | orchestrator | Sunday 05 April 2026 00:26:12 +0000 (0:00:00.461) 0:00:05.484 **********
2026-04-05 00:26:45.297416 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:26:45.297428 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:26:45.297439 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:26:45.297457 | orchestrator |
2026-04-05 00:26:45.297468 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-05 00:26:45.297479 | orchestrator | Sunday 05 April 2026 00:26:13 +0000 (0:00:00.997) 0:00:06.482 **********
2026-04-05 00:26:45.297490 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:26:45.297501 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:26:45.297511 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:26:45.297522 | orchestrator |
2026-04-05 00:26:45.297533 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-05 00:26:45.297544 | orchestrator | Sunday 05 April 2026 00:26:29 +0000 (0:00:15.428) 0:00:21.910 **********
2026-04-05 00:26:45.297555 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:26:45.297565 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:26:45.297576 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:26:45.297587 | orchestrator |
2026-04-05 00:26:45.297598 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-05 00:26:45.297627 | orchestrator | Sunday 05 April 2026 00:26:29 +0000 (0:00:00.097) 0:00:22.008 **********
2026-04-05 00:26:45.297639 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:26:45.297650 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:26:45.297661 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:26:45.297672 | orchestrator |
2026-04-05 00:26:45.297683 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 00:26:45.297694 | orchestrator | Sunday 05 April 2026 00:26:36 +0000 (0:00:07.376) 0:00:29.385 **********
2026-04-05 00:26:45.297705 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.297716 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.297727 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.297738 | orchestrator |
2026-04-05 00:26:45.297749 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-05 00:26:45.297760 | orchestrator | Sunday 05 April 2026 00:26:37 +0000 (0:00:00.431) 0:00:29.816 **********
2026-04-05 00:26:45.297771 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-05 00:26:45.297783 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-05 00:26:45.297794 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-05 00:26:45.297821 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-05 00:26:45.297832 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-05 00:26:45.297855 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-05 00:26:45.297866 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-05 00:26:45.297876 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-05 00:26:45.297887 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-05 00:26:45.297898 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-05 00:26:45.297909 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-05 00:26:45.297920 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-05 00:26:45.297931 | orchestrator |
2026-04-05 00:26:45.297942 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-05 00:26:45.297953 | orchestrator | Sunday 05 April 2026 00:26:40 +0000 (0:00:03.468) 0:00:33.285 **********
2026-04-05 00:26:45.297964 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.297975 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.297986 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.297996 | orchestrator |
2026-04-05 00:26:45.298328 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 00:26:45.298349 | orchestrator |
2026-04-05 00:26:45.298360 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:26:45.298372 | orchestrator | Sunday 05 April 2026 00:26:41 +0000 (0:00:01.205) 0:00:34.490 **********
2026-04-05 00:26:45.298412 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:26:45.298424 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:26:45.298435 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:26:45.298446 | orchestrator | ok: [testbed-manager]
2026-04-05 00:26:45.298456 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:26:45.298467 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:26:45.298478 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:26:45.298489 | orchestrator |
2026-04-05 00:26:45.298500 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:26:45.298559 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:26:45.298572 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:26:45.298584 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:26:45.298595 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:26:45.298606 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:26:45.298618 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:26:45.298629 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 00:26:45.298639 | orchestrator |
2026-04-05 00:26:45.298650 | orchestrator |
2026-04-05 00:26:45.298661 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:26:45.298672 | orchestrator | Sunday 05 April 2026 00:26:45 +0000 (0:00:03.476) 0:00:37.967 **********
2026-04-05 00:26:45.298683 | orchestrator | ===============================================================================
2026-04-05 00:26:45.298694 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.43s
2026-04-05 00:26:45.298705 | orchestrator | Install required packages (Debian) -------------------------------------- 7.38s
2026-04-05 00:26:45.298716 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.48s
2026-04-05 00:26:45.298726 | orchestrator | Copy fact files --------------------------------------------------------- 3.47s
2026-04-05 00:26:45.298737 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-04-05 00:26:45.298748 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s
2026-04-05 00:26:45.298771 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.21s
2026-04-05 00:26:45.522703 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2026-04-05 00:26:45.522810 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.00s
2026-04-05 00:26:45.522825 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-04-05 00:26:45.522838 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-04-05 00:26:45.522852 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2026-04-05 00:26:45.522866 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-04-05 00:26:45.522878 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-04-05 00:26:45.522892 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-04-05 00:26:45.522906 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-04-05 00:26:45.522919 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-04-05 00:26:45.522959 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-05 00:26:45.739206 | orchestrator | + osism apply bootstrap
2026-04-05 00:26:57.124821 | orchestrator | 2026-04-05 00:26:57 | INFO  | Prepare task for execution of bootstrap.
2026-04-05 00:26:57.207761 | orchestrator | 2026-04-05 00:26:57 | INFO  | Task 0986a77c-7ae8-43e1-966a-1806bf920eab (bootstrap) was prepared for execution. 2026-04-05 00:26:57.207855 | orchestrator | 2026-04-05 00:26:57 | INFO  | It takes a moment until task 0986a77c-7ae8-43e1-966a-1806bf920eab (bootstrap) has been started and output is visible here. 2026-04-05 00:27:12.856821 | orchestrator | 2026-04-05 00:27:12.856938 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-04-05 00:27:12.856956 | orchestrator | 2026-04-05 00:27:12.856968 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-04-05 00:27:12.856979 | orchestrator | Sunday 05 April 2026 00:27:00 +0000 (0:00:00.195) 0:00:00.195 ********** 2026-04-05 00:27:12.856991 | orchestrator | ok: [testbed-manager] 2026-04-05 00:27:12.857002 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:27:12.857013 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:27:12.857024 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:27:12.857035 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:27:12.857046 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:27:12.857056 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:27:12.857067 | orchestrator | 2026-04-05 00:27:12.857078 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:27:12.857089 | orchestrator | 2026-04-05 00:27:12.857099 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 00:27:12.857110 | orchestrator | Sunday 05 April 2026 00:27:00 +0000 (0:00:00.344) 0:00:00.540 ********** 2026-04-05 00:27:12.857122 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:27:12.857133 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:27:12.857144 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:27:12.857155 | orchestrator | ok: [testbed-manager] 2026-04-05 
00:27:12.857165 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:12.857176 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:12.857186 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:12.857197 | orchestrator |
2026-04-05 00:27:12.857208 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-05 00:27:12.857219 | orchestrator |
2026-04-05 00:27:12.857229 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 00:27:12.857240 | orchestrator | Sunday 05 April 2026 00:27:05 +0000 (0:00:04.543) 0:00:05.084 **********
2026-04-05 00:27:12.857252 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-05 00:27:12.857263 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-05 00:27:12.857274 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-05 00:27:12.857284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-05 00:27:12.857295 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-05 00:27:12.857306 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-05 00:27:12.857317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 00:27:12.857327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 00:27:12.857338 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-05 00:27:12.857349 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-05 00:27:12.857431 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 00:27:12.857445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 00:27:12.857456 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-05 00:27:12.857467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 00:27:12.857478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-05 00:27:12.857515 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 00:27:12.857527 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-05 00:27:12.857538 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 00:27:12.857548 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 00:27:12.857559 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 00:27:12.857570 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:12.857580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-05 00:27:12.857592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-05 00:27:12.857612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 00:27:12.857632 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 00:27:12.857661 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-05 00:27:12.857700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-05 00:27:12.857719 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.857738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 00:27:12.857756 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 00:27:12.857776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 00:27:12.857793 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 00:27:12.857811 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:12.857830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 00:27:12.857849 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 00:27:12.857868 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 00:27:12.857890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 00:27:12.857910 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-05 00:27:12.857930 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 00:27:12.857950 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:12.857963 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 00:27:12.857974 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 00:27:12.857984 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:27:12.857995 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 00:27:12.858006 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 00:27:12.858082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:27:12.858174 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 00:27:12.858199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 00:27:12.858218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:27:12.858236 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:12.858254 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 00:27:12.858272 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 00:27:12.858289 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:12.858308 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 00:27:12.858328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 00:27:12.858346 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:12.858396 | orchestrator |
2026-04-05 00:27:12.858416 |
orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-05 00:27:12.858434 | orchestrator |
2026-04-05 00:27:12.858452 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-05 00:27:12.858464 | orchestrator | Sunday 05 April 2026 00:27:06 +0000 (0:00:00.549) 0:00:05.633 **********
2026-04-05 00:27:12.858475 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:12.858500 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:12.858511 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:12.858522 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:12.858533 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:12.858543 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:12.858554 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:12.858565 | orchestrator |
2026-04-05 00:27:12.858575 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-05 00:27:12.858586 | orchestrator | Sunday 05 April 2026 00:27:07 +0000 (0:00:01.212) 0:00:06.846 **********
2026-04-05 00:27:12.858597 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:12.858608 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:12.858619 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:12.858629 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:12.858640 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:12.858651 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:12.858661 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:12.858672 | orchestrator |
2026-04-05 00:27:12.858683 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-05 00:27:12.858694 | orchestrator | Sunday 05 April 2026 00:27:08 +0000 (0:00:01.290) 0:00:08.137 **********
2026-04-05 00:27:12.858706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:27:12.858719 | orchestrator |
2026-04-05 00:27:12.858730 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-05 00:27:12.858741 | orchestrator | Sunday 05 April 2026 00:27:08 +0000 (0:00:00.293) 0:00:08.430 **********
2026-04-05 00:27:12.858752 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.858763 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.858773 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.858784 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.858795 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.858806 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.858817 | orchestrator | changed: [testbed-manager]
2026-04-05 00:27:12.858827 | orchestrator |
2026-04-05 00:27:12.858838 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-05 00:27:12.858849 | orchestrator | Sunday 05 April 2026 00:27:10 +0000 (0:00:01.444) 0:00:09.874 **********
2026-04-05 00:27:12.858860 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:12.858872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:27:12.858885 | orchestrator |
2026-04-05 00:27:12.858896 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-05 00:27:12.858906 | orchestrator | Sunday 05 April 2026 00:27:10 +0000 (0:00:00.301) 0:00:10.175 **********
2026-04-05 00:27:12.858917 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.858928 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.858939 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.858950 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.858961 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.858971 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.858982 | orchestrator |
2026-04-05 00:27:12.858993 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-05 00:27:12.859004 | orchestrator | Sunday 05 April 2026 00:27:11 +0000 (0:00:01.074) 0:00:11.249 **********
2026-04-05 00:27:12.859015 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:12.859026 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:12.859047 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:12.859059 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:12.859069 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:12.859080 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:12.859098 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:12.859109 | orchestrator |
2026-04-05 00:27:12.859120 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-05 00:27:12.859130 | orchestrator | Sunday 05 April 2026 00:27:12 +0000 (0:00:00.622) 0:00:11.872 **********
2026-04-05 00:27:12.859141 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.859152 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:12.859163 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:12.859173 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:12.859184 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:12.859195 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:12.859206 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:12.859216 | orchestrator |
2026-04-05 00:27:12.859227 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-05 00:27:12.859239 | orchestrator | Sunday 05 April 2026 00:27:12 +0000 (0:00:00.244) 0:00:12.314 **********
2026-04-05 00:27:12.859250 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:12.859261 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:12.859283 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:24.924885 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:24.924991 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:24.925007 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:24.925018 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:24.925030 | orchestrator |
2026-04-05 00:27:24.925042 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-05 00:27:24.925055 | orchestrator | Sunday 05 April 2026 00:27:12 +0000 (0:00:00.244) 0:00:12.559 **********
2026-04-05 00:27:24.925068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:27:24.925094 | orchestrator |
2026-04-05 00:27:24.925106 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-05 00:27:24.925118 | orchestrator | Sunday 05 April 2026 00:27:13 +0000 (0:00:00.319) 0:00:12.879 **********
2026-04-05 00:27:24.925129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:27:24.925141 | orchestrator |
2026-04-05 00:27:24.925152 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-05
00:27:24.925163 | orchestrator | Sunday 05 April 2026 00:27:13 +0000 (0:00:00.343) 0:00:13.222 **********
2026-04-05 00:27:24.925174 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.925185 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.925196 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.925207 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.925218 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.925229 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.925240 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.925251 | orchestrator |
2026-04-05 00:27:24.925262 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-05 00:27:24.925274 | orchestrator | Sunday 05 April 2026 00:27:14 +0000 (0:00:01.206) 0:00:14.428 **********
2026-04-05 00:27:24.925285 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:24.925296 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:24.925307 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:24.925318 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:24.925329 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:24.925340 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:24.925375 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:24.925387 | orchestrator |
2026-04-05 00:27:24.925399 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-05 00:27:24.925432 | orchestrator | Sunday 05 April 2026 00:27:15 +0000 (0:00:00.218) 0:00:14.647 **********
2026-04-05 00:27:24.925445 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.925457 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.925470 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.925482 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.925494 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.925505 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.925517 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.925530 | orchestrator |
2026-04-05 00:27:24.925543 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-05 00:27:24.925556 | orchestrator | Sunday 05 April 2026 00:27:15 +0000 (0:00:00.551) 0:00:15.199 **********
2026-04-05 00:27:24.925568 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:24.925580 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:24.925593 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:24.925605 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:24.925618 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:24.925630 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:24.925642 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:24.925654 | orchestrator |
2026-04-05 00:27:24.925667 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-05 00:27:24.925681 | orchestrator | Sunday 05 April 2026 00:27:15 +0000 (0:00:00.281) 0:00:15.480 **********
2026-04-05 00:27:24.925694 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.925714 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:24.925727 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:24.925739 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:24.925752 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:24.925763 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:24.925774 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:24.925785 | orchestrator |
2026-04-05 00:27:24.925796 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-05 00:27:24.925807 | orchestrator | Sunday 05 April 2026 00:27:16 +0000 (0:00:00.529) 0:00:16.010 **********
2026-04-05 00:27:24.925818 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.925829 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:24.925840 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:24.925851 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:24.925877 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:24.925888 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:24.925911 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:24.925922 | orchestrator |
2026-04-05 00:27:24.925933 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-05 00:27:24.925944 | orchestrator | Sunday 05 April 2026 00:27:17 +0000 (0:00:01.108) 0:00:17.119 **********
2026-04-05 00:27:24.925955 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.925966 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.925977 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.925988 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.925999 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.926010 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.926188 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.926201 | orchestrator |
2026-04-05 00:27:24.926213 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-05 00:27:24.926224 | orchestrator | Sunday 05 April 2026 00:27:18 +0000 (0:00:01.098) 0:00:18.217 **********
2026-04-05 00:27:24.926255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:27:24.926268 | orchestrator |
2026-04-05 00:27:24.926279 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-05 00:27:24.926300 | orchestrator | Sunday 05 April 2026 00:27:19 +0000 (0:00:00.372) 0:00:18.590 **********
2026-04-05 00:27:24.926311 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:24.926322 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:27:24.926333 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:24.926370 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:27:24.926382 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:27:24.926406 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:24.926417 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:24.926428 | orchestrator |
2026-04-05 00:27:24.926450 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-05 00:27:24.926461 | orchestrator | Sunday 05 April 2026 00:27:20 +0000 (0:00:01.302) 0:00:19.892 **********
2026-04-05 00:27:24.926472 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.926483 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.926494 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.926505 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.926516 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.926527 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.926537 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.926548 | orchestrator |
2026-04-05 00:27:24.926559 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-05 00:27:24.926571 | orchestrator | Sunday 05 April 2026 00:27:20 +0000 (0:00:00.237) 0:00:20.130 **********
2026-04-05 00:27:24.926582 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.926593 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.926604 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.926614 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.926625 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.926636 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.926647 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.926658 | orchestrator |
2026-04-05 00:27:24.926668 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-05 00:27:24.926680 | orchestrator | Sunday 05 April 2026 00:27:20 +0000 (0:00:00.275) 0:00:20.405 **********
2026-04-05 00:27:24.926691 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.926701 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.926712 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.926723 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.926734 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.926745 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.926756 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.926767 | orchestrator |
2026-04-05 00:27:24.926778 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-05 00:27:24.926789 | orchestrator | Sunday 05 April 2026 00:27:21 +0000 (0:00:00.350) 0:00:20.675 **********
2026-04-05 00:27:24.926802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:27:24.926815 | orchestrator |
2026-04-05 00:27:24.926826 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-05 00:27:24.926837 | orchestrator | Sunday 05 April 2026 00:27:21 +0000 (0:00:00.583) 0:00:21.025 **********
2026-04-05 00:27:24.926848 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.926859 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.926870 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.926881 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.926892 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.926903 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.926913 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.926924 | orchestrator |
2026-04-05 00:27:24.926935 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-05 00:27:24.926947 | orchestrator | Sunday 05 April 2026 00:27:22 +0000 (0:00:00.583) 0:00:21.608 **********
2026-04-05 00:27:24.926958 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:27:24.926975 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:27:24.926987 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:27:24.926998 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:27:24.927009 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:27:24.927020 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:27:24.927031 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:27:24.927042 | orchestrator |
2026-04-05 00:27:24.927053 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-05 00:27:24.927064 | orchestrator | Sunday 05 April 2026 00:27:22 +0000 (0:00:00.304) 0:00:21.913 **********
2026-04-05 00:27:24.927075 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.927087 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:24.927098 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:27:24.927109 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:27:24.927120 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.927131 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.927142 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.927153 | orchestrator |
2026-04-05 00:27:24.927164 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-05 00:27:24.927175 | orchestrator | Sunday 05 April 2026 00:27:23 +0000 (0:00:01.007) 0:00:22.921 **********
2026-04-05 00:27:24.927186 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.927197 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:27:24.927208 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:27:24.927219 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:27:24.927230 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.927241 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.927252 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:27:24.927263 | orchestrator |
2026-04-05 00:27:24.927274 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-05 00:27:24.927285 | orchestrator | Sunday 05 April 2026 00:27:23 +0000 (0:00:00.605) 0:00:23.527 **********
2026-04-05 00:27:24.927297 | orchestrator | ok: [testbed-manager]
2026-04-05 00:27:24.927308 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:27:24.927318 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:27:24.927329 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:27:24.927398 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:04.261888 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.261979 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:04.261994 | orchestrator |
2026-04-05 00:28:04.262005 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-05 00:28:04.262060 | orchestrator | Sunday 05 April 2026 00:27:25 +0000 (0:00:01.070) 0:00:24.598 **********
2026-04-05 00:28:04.262072 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.262081 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.262090 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.262099 | orchestrator | changed: [testbed-manager]
2026-04-05 00:28:04.262108 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:28:04.262117 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:04.262126 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:04.262135 | orchestrator |
2026-04-05 00:28:04.262145 | orchestrator | TASK
[osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-05 00:28:04.262155 | orchestrator | Sunday 05 April 2026 00:27:40 +0000 (0:00:15.857) 0:00:40.455 **********
2026-04-05 00:28:04.262164 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.262173 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.262182 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.262191 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.262200 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.262209 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.262218 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.262227 | orchestrator |
2026-04-05 00:28:04.262237 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-05 00:28:04.262246 | orchestrator | Sunday 05 April 2026 00:27:41 +0000 (0:00:00.236) 0:00:40.692 **********
2026-04-05 00:28:04.262274 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.262284 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.262293 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.262351 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.262360 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.262369 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.262384 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.262398 | orchestrator |
2026-04-05 00:28:04.262413 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-05 00:28:04.262427 | orchestrator | Sunday 05 April 2026 00:27:41 +0000 (0:00:00.237) 0:00:40.929 **********
2026-04-05 00:28:04.262442 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.262456 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.262471 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.262486 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.262501 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.262517 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.262527 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.262538 | orchestrator |
2026-04-05 00:28:04.262548 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-05 00:28:04.262558 | orchestrator | Sunday 05 April 2026 00:27:41 +0000 (0:00:00.249) 0:00:41.179 **********
2026-04-05 00:28:04.262569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:28:04.262582 | orchestrator |
2026-04-05 00:28:04.262593 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-05 00:28:04.262603 | orchestrator | Sunday 05 April 2026 00:27:41 +0000 (0:00:00.292) 0:00:41.472 **********
2026-04-05 00:28:04.262613 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.262623 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.262633 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.262643 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.262653 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.262662 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.262674 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.262683 | orchestrator |
2026-04-05 00:28:04.262694 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-05 00:28:04.262704 | orchestrator | Sunday 05 April 2026 00:27:43 +0000 (0:00:01.556) 0:00:43.028 **********
2026-04-05 00:28:04.262714 | orchestrator | changed: [testbed-manager]
2026-04-05 00:28:04.262736 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:04.262747 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:04.262757 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:28:04.262768 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:04.262782 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:04.262792 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:04.262803 | orchestrator |
2026-04-05 00:28:04.262814 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-05 00:28:04.262823 | orchestrator | Sunday 05 April 2026 00:27:44 +0000 (0:00:00.945) 0:00:43.973 **********
2026-04-05 00:28:04.262831 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.262840 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.262848 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.262857 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.262865 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.262874 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.262882 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.262891 | orchestrator |
2026-04-05 00:28:04.262899 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-05 00:28:04.262908 | orchestrator | Sunday 05 April 2026 00:27:45 +0000 (0:00:00.710) 0:00:44.684 **********
2026-04-05 00:28:04.262917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:28:04.262936 | orchestrator |
2026-04-05 00:28:04.262945 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-05 00:28:04.262954 | orchestrator | Sunday 05 April 2026 00:27:45 +0000 (0:00:00.239) 0:00:44.924 **********
2026-04-05 00:28:04.262962 | orchestrator | changed: [testbed-manager]
2026-04-05 00:28:04.262971 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:04.262979 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:28:04.262988 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:04.262997 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:04.263005 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:04.263014 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:04.263023 | orchestrator |
2026-04-05 00:28:04.263045 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-05 00:28:04.263055 | orchestrator | Sunday 05 April 2026 00:27:46 +0000 (0:00:00.884) 0:00:45.809 **********
2026-04-05 00:28:04.263063 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:28:04.263072 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:28:04.263081 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:28:04.263090 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:28:04.263098 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:28:04.263107 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:28:04.263115 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:28:04.263124 | orchestrator |
2026-04-05 00:28:04.263133 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-05 00:28:04.263142 | orchestrator | Sunday 05 April 2026 00:27:46 +0000 (0:00:00.215) 0:00:46.024 **********
2026-04-05 00:28:04.263151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:28:04.263160 | orchestrator |
2026-04-05 00:28:04.263169 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-05 00:28:04.263177 | orchestrator | Sunday 05 April 2026 00:27:46 +0000 (0:00:00.264) 0:00:46.289 **********
2026-04-05 00:28:04.263186 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.263194 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.263203 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.263211 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.263220 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.263229 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.263237 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.263246 | orchestrator |
2026-04-05 00:28:04.263254 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-05 00:28:04.263263 | orchestrator | Sunday 05 April 2026 00:27:48 +0000 (0:00:01.483) 0:00:47.772 **********
2026-04-05 00:28:04.263272 | orchestrator | changed: [testbed-manager]
2026-04-05 00:28:04.263280 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:28:04.263289 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:04.263329 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:04.263346 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:04.263361 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:04.263376 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:04.263386 | orchestrator |
2026-04-05 00:28:04.263397 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-05 00:28:04.263408 | orchestrator | Sunday 05 April 2026 00:27:49 +0000 (0:00:01.109) 0:00:48.882 **********
2026-04-05 00:28:04.263419 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:28:04.263430 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:28:04.263440 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:28:04.263451 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:28:04.263462 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:28:04.263472 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:28:04.263491 | orchestrator | changed: [testbed-manager]
2026-04-05 00:28:04.263502 | orchestrator |
2026-04-05 00:28:04.263513 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-05 00:28:04.263524 | orchestrator | Sunday 05 April 2026 00:28:01 +0000 (0:00:12.176) 0:01:01.059 **********
2026-04-05 00:28:04.263534 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.263545 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.263556 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.263566 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.263577 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.263587 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.263598 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.263609 | orchestrator |
2026-04-05 00:28:04.263619 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-05 00:28:04.263630 | orchestrator | Sunday 05 April 2026 00:28:02 +0000 (0:00:01.188) 0:01:02.247 **********
2026-04-05 00:28:04.263641 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.263651 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.263662 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.263673 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.263683 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.263694 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.263710 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.263721 | orchestrator |
2026-04-05 00:28:04.263732 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-05 00:28:04.263743 | orchestrator | Sunday 05 April 2026 00:28:03 +0000 (0:00:00.906) 0:01:03.154 **********
2026-04-05 00:28:04.263753 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.263764 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.263775 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.263785 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.263796 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.263806 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.263817 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.263827 | orchestrator |
2026-04-05 00:28:04.263838 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-05 00:28:04.263849 | orchestrator | Sunday 05 April 2026 00:28:03 +0000 (0:00:00.220) 0:01:03.375 **********
2026-04-05 00:28:04.263859 | orchestrator | ok: [testbed-manager]
2026-04-05 00:28:04.263870 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:28:04.263880 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:28:04.263891 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:28:04.263902 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:28:04.263912 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:28:04.263923 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:28:04.263934 | orchestrator |
2026-04-05 00:28:04.263945 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-05 00:28:04.263955 | orchestrator | Sunday 05 April 2026 00:28:03 +0000 (0:00:00.179) 0:01:03.554 **********
2026-04-05 00:28:04.263967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:28:04.263978 | orchestrator |
2026-04-05 00:28:04.263997 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-05 00:30:28.800046 | orchestrator | Sunday 05 April 2026 00:28:04 +0000 (0:00:00.287) 0:01:03.841 **********
2026-04-05 00:30:28.800155 | orchestrator | ok: [testbed-manager]
2026-04-05 00:30:28.800171 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:30:28.800182 | orchestrator |
ok: [testbed-node-3] 2026-04-05 00:30:28.800193 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:30:28.800203 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:30:28.800212 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:30:28.800222 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:30:28.800232 | orchestrator | 2026-04-05 00:30:28.800243 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-04-05 00:30:28.800275 | orchestrator | Sunday 05 April 2026 00:28:06 +0000 (0:00:01.837) 0:01:05.679 ********** 2026-04-05 00:30:28.800286 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:30:28.800297 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:30:28.800321 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:30:28.800331 | orchestrator | changed: [testbed-manager] 2026-04-05 00:30:28.800341 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:30:28.800351 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:30:28.800361 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:30:28.800370 | orchestrator | 2026-04-05 00:30:28.800381 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-05 00:30:28.800392 | orchestrator | Sunday 05 April 2026 00:28:06 +0000 (0:00:00.592) 0:01:06.272 ********** 2026-04-05 00:30:28.800457 | orchestrator | ok: [testbed-manager] 2026-04-05 00:30:28.800469 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:30:28.800488 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:30:28.800498 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:30:28.800508 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:30:28.800518 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:30:28.800527 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:30:28.800537 | orchestrator | 2026-04-05 00:30:28.800547 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-05 
00:30:28.800557 | orchestrator | Sunday 05 April 2026 00:28:06 +0000 (0:00:00.236) 0:01:06.508 ********** 2026-04-05 00:30:28.800567 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:30:28.800577 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:30:28.800586 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:30:28.800596 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:30:28.800605 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:30:28.800615 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:30:28.800624 | orchestrator | ok: [testbed-manager] 2026-04-05 00:30:28.800634 | orchestrator | 2026-04-05 00:30:28.800644 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-05 00:30:28.800654 | orchestrator | Sunday 05 April 2026 00:28:08 +0000 (0:00:01.234) 0:01:07.742 ********** 2026-04-05 00:30:28.800673 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:30:28.800684 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:30:28.800693 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:30:28.800703 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:30:28.800713 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:30:28.800722 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:30:28.800732 | orchestrator | changed: [testbed-manager] 2026-04-05 00:30:28.800742 | orchestrator | 2026-04-05 00:30:28.800752 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-05 00:30:28.800761 | orchestrator | Sunday 05 April 2026 00:28:09 +0000 (0:00:01.661) 0:01:09.404 ********** 2026-04-05 00:30:28.800771 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:30:28.800781 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:30:28.800791 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:30:28.800801 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:30:28.800810 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:30:28.800820 | orchestrator | ok: 
[testbed-node-2] 2026-04-05 00:30:28.800830 | orchestrator | ok: [testbed-manager] 2026-04-05 00:30:28.800840 | orchestrator | 2026-04-05 00:30:28.800849 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-05 00:30:28.800859 | orchestrator | Sunday 05 April 2026 00:28:11 +0000 (0:00:02.031) 0:01:11.435 ********** 2026-04-05 00:30:28.800879 | orchestrator | ok: [testbed-manager] 2026-04-05 00:30:28.800889 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:30:28.800899 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:30:28.800908 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:30:28.800918 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:30:28.800927 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:30:28.800937 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:30:28.800955 | orchestrator | 2026-04-05 00:30:28.800965 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-05 00:30:28.800999 | orchestrator | Sunday 05 April 2026 00:28:51 +0000 (0:00:39.988) 0:01:51.423 ********** 2026-04-05 00:30:28.801009 | orchestrator | changed: [testbed-manager] 2026-04-05 00:30:28.801019 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:30:28.801029 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:30:28.801039 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:30:28.801048 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:30:28.801058 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:30:28.801082 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:30:28.801100 | orchestrator | 2026-04-05 00:30:28.801116 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-05 00:30:28.801132 | orchestrator | Sunday 05 April 2026 00:30:11 +0000 (0:01:19.805) 0:03:11.229 ********** 2026-04-05 00:30:28.801149 | orchestrator | ok: [testbed-manager] 2026-04-05 00:30:28.801162 | orchestrator | 
ok: [testbed-node-0] 2026-04-05 00:30:28.801172 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:30:28.801182 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:30:28.801191 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:30:28.801201 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:30:28.801210 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:30:28.801235 | orchestrator | 2026-04-05 00:30:28.801246 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-05 00:30:28.801256 | orchestrator | Sunday 05 April 2026 00:30:13 +0000 (0:00:02.054) 0:03:13.284 ********** 2026-04-05 00:30:28.801265 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:30:28.801275 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:30:28.801284 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:30:28.801294 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:30:28.801303 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:30:28.801313 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:30:28.801323 | orchestrator | changed: [testbed-manager] 2026-04-05 00:30:28.801332 | orchestrator | 2026-04-05 00:30:28.801342 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-05 00:30:28.801352 | orchestrator | Sunday 05 April 2026 00:30:27 +0000 (0:00:13.924) 0:03:27.208 ********** 2026-04-05 00:30:28.801391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-05 00:30:28.801436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-05 00:30:28.801450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-05 00:30:28.801462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-05 00:30:28.801481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-05 00:30:28.801495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-04-05 00:30:28.801505 | orchestrator | 2026-04-05 00:30:28.801515 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-05 00:30:28.801525 | orchestrator | Sunday 05 April 2026 00:30:28 +0000 (0:00:00.427) 0:03:27.635 ********** 2026-04-05 00:30:28.801535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:30:28.801544 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:30:28.801554 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:30:28.801564 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:30:28.801574 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:30:28.801583 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:30:28.801593 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-05 00:30:28.801603 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:30:28.801624 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:30:28.801634 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:30:28.801655 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 00:30:28.801665 | orchestrator | 2026-04-05 00:30:28.801675 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-05 00:30:28.801685 | orchestrator | Sunday 05 April 2026 00:30:28 +0000 (0:00:00.673) 0:03:28.309 ********** 2026-04-05 00:30:28.801720 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:30:28.801732 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:30:28.801742 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:30:28.801751 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:30:28.801761 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:30:28.801777 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:30:35.947851 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:30:35.947936 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:30:35.947944 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:30:35.947949 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:30:35.947955 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:30:35.947962 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:30:35.947967 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:30:35.947971 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:30:35.947990 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:30:35.947995 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:30:35.947999 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 
00:30:35.948004 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:30:35.948009 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:30:35.948014 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:30:35.948018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:30:35.948023 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:30:35.948027 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:30:35.948032 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:30:35.948097 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:30:35.948103 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:30:35.948108 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:30:35.948113 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:30:35.948117 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:30:35.948122 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:30:35.948127 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:30:35.948131 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:30:35.948136 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
00:30:35.948141 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-05 00:30:35.948145 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-05 00:30:35.948160 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-05 00:30:35.948165 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-05 00:30:35.948170 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-05 00:30:35.948174 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-05 00:30:35.948179 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-05 00:30:35.948183 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-05 00:30:35.948188 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-05 00:30:35.948193 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-05 00:30:35.948197 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:30:35.948202 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:30:35.948207 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:30:35.948212 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-05 00:30:35.948224 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:30:35.948229 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:30:35.948245 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:30:35.948250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:30:35.948254 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:30:35.948259 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:30:35.948264 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-05 00:30:35.948268 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:30:35.948273 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:30:35.948277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-05 00:30:35.948282 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:30:35.948286 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:30:35.948291 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-05 00:30:35.948296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:30:35.948300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:30:35.948305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-05 00:30:35.948309 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 
2026-04-05 00:30:35.948314 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-05 00:30:35.948318 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-05 00:30:35.948323 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-05 00:30:35.948327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-05 00:30:35.948332 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-05 00:30:35.948337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-05 00:30:35.948341 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-05 00:30:35.948346 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-05 00:30:35.948350 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-05 00:30:35.948355 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-05 00:30:35.948360 | orchestrator | 2026-04-05 00:30:35.948365 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-05 00:30:35.948369 | orchestrator | Sunday 05 April 2026 00:30:33 +0000 (0:00:05.098) 0:03:33.408 ********** 2026-04-05 00:30:35.948374 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:30:35.948378 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:30:35.948386 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:30:35.948391 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:30:35.948401 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:30:35.948407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:30:35.948412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-05 00:30:35.948417 | orchestrator | 2026-04-05 00:30:35.948464 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-05 00:30:35.948470 | orchestrator | Sunday 05 April 2026 00:30:35 +0000 (0:00:01.534) 0:03:34.943 ********** 2026-04-05 00:30:35.948475 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:35.948481 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:35.948486 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:30:35.948492 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:35.948497 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:30:35.948502 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:35.948508 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:30:35.948513 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:30:35.948518 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:30:35.948524 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:30:35.948533 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 
00:30:48.820379 | orchestrator | 2026-04-05 00:30:48.820521 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-05 00:30:48.820534 | orchestrator | Sunday 05 April 2026 00:30:35 +0000 (0:00:00.618) 0:03:35.561 ********** 2026-04-05 00:30:48.820539 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:48.820546 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:30:48.820552 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:48.820557 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:30:48.820563 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:48.820567 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:30:48.820572 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-05 00:30:48.820577 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:30:48.820582 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:30:48.820587 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:30:48.820592 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-05 00:30:48.820596 | orchestrator | 2026-04-05 00:30:48.820601 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-05 00:30:48.820606 | orchestrator | Sunday 05 April 2026 00:30:36 +0000 (0:00:00.561) 0:03:36.122 ********** 2026-04-05 00:30:48.820611 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 
00:30:48.820615 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 00:30:48.820620 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:30:48.820625 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:30:48.820629 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 00:30:48.820651 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 00:30:48.820656 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:30:48.820661 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:30:48.820666 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 00:30:48.820671 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 00:30:48.820675 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 00:30:48.820680 | orchestrator | 2026-04-05 00:30:48.820684 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-05 00:30:48.820689 | orchestrator | Sunday 05 April 2026 00:30:37 +0000 (0:00:00.735) 0:03:36.858 ********** 2026-04-05 00:30:48.820694 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:30:48.820699 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:30:48.820703 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:30:48.820708 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:30:48.820712 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:30:48.820717 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:30:48.820722 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:30:48.820726 | orchestrator | 2026-04-05 00:30:48.820731 | orchestrator | TASK 
[osism.commons.services : Populate service facts] *************************
2026-04-05 00:30:48.820736 | orchestrator | Sunday 05 April 2026 00:30:37 +0000 (0:00:00.292) 0:03:37.150 **********
2026-04-05 00:30:48.820741 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:30:48.820747 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:30:48.820751 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:30:48.820756 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:30:48.820761 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:30:48.820765 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:30:48.820770 | orchestrator | ok: [testbed-manager]
2026-04-05 00:30:48.820775 | orchestrator |
2026-04-05 00:30:48.820779 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-05 00:30:48.820784 | orchestrator | Sunday 05 April 2026 00:30:42 +0000 (0:00:05.322) 0:03:42.473 **********
2026-04-05 00:30:48.820789 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-05 00:30:48.820794 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:30:48.820798 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-05 00:30:48.820803 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:30:48.820808 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-05 00:30:48.820812 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:30:48.820817 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-05 00:30:48.820822 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-05 00:30:48.820826 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:30:48.820831 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:30:48.820835 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-05 00:30:48.820840 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:30:48.820845 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-05 00:30:48.820849 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:30:48.820854 | orchestrator |
2026-04-05 00:30:48.820858 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-05 00:30:48.820863 | orchestrator | Sunday 05 April 2026 00:30:43 +0000 (0:00:00.334) 0:03:42.808 **********
2026-04-05 00:30:48.820868 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-05 00:30:48.820873 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-05 00:30:48.820878 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-05 00:30:48.820894 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-05 00:30:48.820899 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-05 00:30:48.820903 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-05 00:30:48.820912 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-05 00:30:48.820916 | orchestrator |
2026-04-05 00:30:48.820922 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-05 00:30:48.820927 | orchestrator | Sunday 05 April 2026 00:30:44 +0000 (0:00:01.176) 0:03:43.984 **********
2026-04-05 00:30:48.820934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:30:48.820941 | orchestrator |
2026-04-05 00:30:48.820947 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-05 00:30:48.820952 | orchestrator | Sunday 05 April 2026 00:30:44 +0000 (0:00:00.436) 0:03:44.421 **********
2026-04-05 00:30:48.820957 | orchestrator | ok: [testbed-manager]
2026-04-05 00:30:48.820963 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:30:48.820968 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:30:48.820973 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:30:48.820978 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:30:48.820984 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:30:48.820989 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:30:48.820995 | orchestrator |
2026-04-05 00:30:48.821000 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-05 00:30:48.821005 | orchestrator | Sunday 05 April 2026 00:30:46 +0000 (0:00:01.444) 0:03:45.865 **********
2026-04-05 00:30:48.821010 | orchestrator | ok: [testbed-manager]
2026-04-05 00:30:48.821015 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:30:48.821021 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:30:48.821026 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:30:48.821031 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:30:48.821037 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:30:48.821042 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:30:48.821047 | orchestrator |
2026-04-05 00:30:48.821053 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-05 00:30:48.821058 | orchestrator | Sunday 05 April 2026 00:30:46 +0000 (0:00:00.621) 0:03:46.486 **********
2026-04-05 00:30:48.821063 | orchestrator | changed: [testbed-manager]
2026-04-05 00:30:48.821069 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:30:48.821074 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:30:48.821079 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:30:48.821085 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:30:48.821090 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:30:48.821095 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:30:48.821100 | orchestrator |
2026-04-05 00:30:48.821105 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-05 00:30:48.821111 | orchestrator | Sunday 05 April 2026 00:30:47 +0000 (0:00:00.650) 0:03:47.137 **********
2026-04-05 00:30:48.821116 | orchestrator | ok: [testbed-manager]
2026-04-05 00:30:48.821135 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:30:48.821140 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:30:48.821146 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:30:48.821151 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:30:48.821156 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:30:48.821162 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:30:48.821167 | orchestrator |
2026-04-05 00:30:48.821173 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-05 00:30:48.821178 | orchestrator | Sunday 05 April 2026 00:30:48 +0000 (0:00:00.646) 0:03:47.784 **********
2026-04-05 00:30:48.821189 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347484.2157824, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:48.821200 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347530.1884427, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:48.821206 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347531.6501067, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:48.821224 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347522.7257693, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427190 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347534.9882457, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427326 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347515.7336085, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427344 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775347522.8924322, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427374 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427405 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427417 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427429 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427468 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427505 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427517 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 00:30:54.427530 | orchestrator |
2026-04-05 00:30:54.427543 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-05 00:30:54.427556 | orchestrator | Sunday 05 April 2026 00:30:49 +0000 (0:00:01.015) 0:03:48.799 **********
2026-04-05 00:30:54.427567 | orchestrator | changed: [testbed-manager]
2026-04-05 00:30:54.427580 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:30:54.427599 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:30:54.427611 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:30:54.427622 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:30:54.427633 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:30:54.427644 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:30:54.427655 | orchestrator |
2026-04-05 00:30:54.427667 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-05 00:30:54.427678 | orchestrator | Sunday 05 April 2026 00:30:50 +0000 (0:00:01.191) 0:03:49.990 **********
2026-04-05 00:30:54.427689 | orchestrator | changed: [testbed-manager]
2026-04-05 00:30:54.427701 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:30:54.427717 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:30:54.427730 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:30:54.427744 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:30:54.427757 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:30:54.427769 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:30:54.427781 | orchestrator |
2026-04-05 00:30:54.427794 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-05 00:30:54.427806 | orchestrator | Sunday 05 April 2026 00:30:51 +0000 (0:00:01.190) 0:03:51.181 **********
2026-04-05 00:30:54.427818 | orchestrator | changed: [testbed-manager]
2026-04-05 00:30:54.427831 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:30:54.427843 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:30:54.427856 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:30:54.427868 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:30:54.427880 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:30:54.427892 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:30:54.427904 | orchestrator |
2026-04-05 00:30:54.427917 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-05 00:30:54.427930 | orchestrator | Sunday 05 April 2026 00:30:52 +0000 (0:00:01.330) 0:03:52.512 **********
2026-04-05 00:30:54.427942 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:30:54.427955 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:30:54.427967 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:30:54.427980 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:30:54.427992 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:30:54.428004 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:30:54.428016 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:30:54.428028 | orchestrator |
2026-04-05 00:30:54.428041 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-05 00:30:54.428053 | orchestrator | Sunday 05 April 2026 00:30:53 +0000 (0:00:00.295) 0:03:52.808 **********
2026-04-05 00:30:54.428067 | orchestrator | ok: [testbed-manager]
2026-04-05 00:30:54.428081 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:30:54.428092 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:30:54.428103 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:30:54.428114 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:30:54.428124 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:30:54.428135 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:30:54.428146 | orchestrator |
2026-04-05 00:30:54.428157 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-05 00:30:54.428168 | orchestrator | Sunday 05 April 2026 00:30:53 +0000 (0:00:00.749) 0:03:53.558 **********
2026-04-05 00:30:54.428183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:30:54.428196 | orchestrator |
2026-04-05 00:30:54.428208 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-05 00:30:54.428226 | orchestrator | Sunday 05 April 2026 00:30:54 +0000 (0:00:00.448) 0:03:54.006 **********
2026-04-05 00:32:12.558834 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.558932 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:12.558952 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:12.558995 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:12.559010 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:12.559024 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:12.559038 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:12.559054 | orchestrator |
2026-04-05 00:32:12.559063 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-05 00:32:12.559073 | orchestrator | Sunday 05 April 2026 00:31:03 +0000 (0:00:08.607) 0:04:02.614 **********
2026-04-05 00:32:12.559081 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.559089 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:12.559097 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:12.559105 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:12.559112 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:12.559120 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:12.559128 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:12.559136 | orchestrator |
2026-04-05 00:32:12.559144 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-05 00:32:12.559152 | orchestrator | Sunday 05 April 2026 00:31:04 +0000 (0:00:01.252) 0:04:03.866 **********
2026-04-05 00:32:12.559160 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.559168 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:12.559175 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:12.559183 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:12.559191 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:12.559199 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:12.559207 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:12.559214 | orchestrator |
2026-04-05 00:32:12.559222 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-05 00:32:12.559230 | orchestrator | Sunday 05 April 2026 00:31:05 +0000 (0:00:01.029) 0:04:04.896 **********
2026-04-05 00:32:12.559238 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.559246 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:12.559253 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:12.559261 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:12.559269 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:12.559277 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:12.559284 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:12.559292 | orchestrator |
2026-04-05 00:32:12.559300 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-05 00:32:12.559309 | orchestrator | Sunday 05 April 2026 00:31:05 +0000 (0:00:00.301) 0:04:05.198 **********
2026-04-05 00:32:12.559317 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.559325 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:12.559332 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:12.559342 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:12.559351 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:12.559360 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:12.559369 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:12.559378 | orchestrator |
2026-04-05 00:32:12.559387 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-05 00:32:12.559397 | orchestrator | Sunday 05 April 2026 00:31:05 +0000 (0:00:00.288) 0:04:05.486 **********
2026-04-05 00:32:12.559406 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.559415 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:12.559424 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:12.559433 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:12.559443 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:12.559452 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:12.559461 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:12.559470 | orchestrator |
2026-04-05 00:32:12.559479 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-05 00:32:12.559488 | orchestrator | Sunday 05 April 2026 00:31:06 +0000 (0:00:00.310) 0:04:05.797 **********
2026-04-05 00:32:12.559498 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.559507 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:12.559516 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:12.559532 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:12.559541 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:12.559550 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:12.559559 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:12.559568 | orchestrator |
2026-04-05 00:32:12.559577 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-05 00:32:12.559587 | orchestrator | Sunday 05 April 2026 00:31:11 +0000 (0:00:05.634) 0:04:11.432 **********
2026-04-05 00:32:12.559598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:12.559611 | orchestrator |
2026-04-05 00:32:12.559619 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-05 00:32:12.559627 | orchestrator | Sunday 05 April 2026 00:31:12 +0000 (0:00:00.409) 0:04:11.841 **********
2026-04-05 00:32:12.559635 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-05 00:32:12.559643 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-05 00:32:12.559651 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-05 00:32:12.559659 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:12.559667 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-05 00:32:12.559675 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-05 00:32:12.559683 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-05 00:32:12.559714 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:12.559723 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-05 00:32:12.559731 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-05 00:32:12.559739 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:12.559750 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-05 00:32:12.559763 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-05 00:32:12.559777 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:12.559785 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-05 00:32:12.559793 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-05 00:32:12.559816 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:12.559824 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:12.559833 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-05 00:32:12.559841 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-05 00:32:12.559849 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:12.559857 | orchestrator |
2026-04-05 00:32:12.559865 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-05 00:32:12.559873 | orchestrator | Sunday 05 April 2026 00:31:12 +0000 (0:00:00.334) 0:04:12.176 **********
2026-04-05 00:32:12.559881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:12.559889 | orchestrator |
2026-04-05 00:32:12.559897 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-05 00:32:12.559905 | orchestrator | Sunday 05 April 2026 00:31:13 +0000 (0:00:00.574) 0:04:12.751 **********
2026-04-05 00:32:12.559914 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-05 00:32:12.559921 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-05 00:32:12.559930 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:12.559937 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:12.559945 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-05 00:32:12.559953 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:12.559968 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-05 00:32:12.559976 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-05 00:32:12.559983 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:12.559991 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-05 00:32:12.559999 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:12.560007 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:12.560015 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-05 00:32:12.560023 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:12.560031 | orchestrator |
2026-04-05 00:32:12.560039 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-05 00:32:12.560047 | orchestrator | Sunday 05 April 2026 00:31:13 +0000 (0:00:00.344) 0:04:13.095 **********
2026-04-05 00:32:12.560076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:12.560090 | orchestrator |
2026-04-05 00:32:12.560102 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-05 00:32:12.560128 | orchestrator | Sunday 05 April 2026 00:31:13 +0000 (0:00:00.434) 0:04:13.530 **********
2026-04-05 00:32:12.560144 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:12.560156 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:12.560169 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:12.560180 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:12.560191 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:12.560203 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:12.560215 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:12.560228 | orchestrator |
2026-04-05 00:32:12.560241 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-05 00:32:12.560254 | orchestrator | Sunday 05 April 2026 00:31:49 +0000 (0:00:35.325) 0:04:48.855 **********
2026-04-05 00:32:12.560267 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:12.560281 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:12.560295 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:12.560309 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:12.560324 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:12.560337 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:12.560351 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:12.560365 | orchestrator |
2026-04-05 00:32:12.560379 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-05 00:32:12.560392 | orchestrator | Sunday 05 April 2026 00:31:57 +0000 (0:00:08.253) 0:04:57.109 **********
2026-04-05 00:32:12.560406 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:12.560420 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:12.560434 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:12.560447 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:12.560461 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:12.560475 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:12.560488 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:12.560502 | orchestrator |
2026-04-05 00:32:12.560516 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-05 00:32:12.560529 | orchestrator | Sunday 05 April 2026 00:32:05 +0000 (0:00:07.565) 0:05:04.674 **********
2026-04-05 00:32:12.560544 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:12.560559 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:12.560571 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:12.560583 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:12.560596 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:12.560609 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:12.560622 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:12.560635 | orchestrator |
2026-04-05 00:32:12.560648 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-05 00:32:12.560675 | orchestrator | Sunday 05 April 2026 00:32:06 +0000 (0:00:01.723) 0:05:06.398 **********
2026-04-05 00:32:12.560715 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:12.560729 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:12.560743 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:12.560758 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:12.560772 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:12.560787 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:12.560800 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:12.560814 | orchestrator |
2026-04-05 00:32:12.560841 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-05 00:32:24.022626 | orchestrator | Sunday 05 April 2026 00:32:12 +0000 (0:00:05.740) 0:05:12.138 **********
2026-04-05 00:32:24.022764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:32:24.022779 | orchestrator |
2026-04-05 00:32:24.022789 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-05 00:32:24.022796 | orchestrator | Sunday 05 April 2026 00:32:13 +0000 (0:00:00.521) 0:05:12.660 **********
2026-04-05 00:32:24.022804 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:24.022814 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:24.022821 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:24.022829 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:24.022836 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:24.022861 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:24.022875 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:24.022902 | orchestrator |
2026-04-05 00:32:24.022914 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-05 00:32:24.022927 | orchestrator | Sunday 05 April 2026 00:32:13 +0000 (0:00:00.732) 0:05:13.393 **********
2026-04-05 00:32:24.022939 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:24.022953 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:24.022965 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:24.022977 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:24.022989 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:24.022997 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:24.023004 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:24.023011 | orchestrator |
2026-04-05 00:32:24.023019 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-05 00:32:24.023026 | orchestrator | Sunday 05 April 2026 00:32:15 +0000 (0:00:01.847) 0:05:15.240 **********
2026-04-05 00:32:24.023034 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:32:24.023041 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:32:24.023048 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:32:24.023056 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:32:24.023063 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:32:24.023070 | orchestrator | changed: [testbed-manager]
2026-04-05 00:32:24.023077 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:32:24.023084 | orchestrator |
2026-04-05 00:32:24.023092 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-05 00:32:24.023099 | orchestrator | Sunday 05 April 2026 00:32:16 +0000 (0:00:00.746) 0:05:15.987 **********
2026-04-05 00:32:24.023106 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:24.023113 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:24.023120 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:24.023128 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:24.023135 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:24.023142 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:24.023149 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:24.023156 | orchestrator |
2026-04-05 00:32:24.023164 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-05 00:32:24.023204 | orchestrator | Sunday 05 April 2026 00:32:16 +0000 (0:00:00.304) 0:05:16.291 **********
2026-04-05 00:32:24.023213 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:24.023222 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:24.023231 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:24.023239 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:24.023248 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:24.023256 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:32:24.023264 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:32:24.023274 | orchestrator |
2026-04-05 00:32:24.023282 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-05 00:32:24.023291 | orchestrator | Sunday 05 April 2026 00:32:17 +0000 (0:00:00.431) 0:05:16.760 **********
2026-04-05 00:32:24.023300 | orchestrator | ok: [testbed-manager]
2026-04-05 00:32:24.023309 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:32:24.023317 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:32:24.023325 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:32:24.023333 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:32:24.023342 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:32:24.023350 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:32:24.023358 | orchestrator |
2026-04-05 00:32:24.023366 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-05 00:32:24.023375 | orchestrator | Sunday 05 April 2026 00:32:17 +0000 (0:00:00.431) 0:05:17.192 **********
2026-04-05 00:32:24.023383 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:32:24.023391 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:32:24.023400 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:32:24.023408 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:32:24.023416 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:32:24.023425 | orchestrator | skipping: [testbed-node-4]
2026-04-05
00:32:24.023433 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:24.023441 | orchestrator | 2026-04-05 00:32:24.023450 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-05 00:32:24.023459 | orchestrator | Sunday 05 April 2026 00:32:17 +0000 (0:00:00.283) 0:05:17.475 ********** 2026-04-05 00:32:24.023468 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:24.023476 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:24.023485 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:24.023494 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:24.023502 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:24.023510 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:24.023518 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:24.023526 | orchestrator | 2026-04-05 00:32:24.023536 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-05 00:32:24.023548 | orchestrator | Sunday 05 April 2026 00:32:18 +0000 (0:00:00.307) 0:05:17.782 ********** 2026-04-05 00:32:24.023560 | orchestrator | ok: [testbed-manager] =>  2026-04-05 00:32:24.023571 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:32:24.023584 | orchestrator | ok: [testbed-node-0] =>  2026-04-05 00:32:24.023595 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:32:24.023606 | orchestrator | ok: [testbed-node-1] =>  2026-04-05 00:32:24.023617 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:32:24.023627 | orchestrator | ok: [testbed-node-2] =>  2026-04-05 00:32:24.023638 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:32:24.023669 | orchestrator | ok: [testbed-node-3] =>  2026-04-05 00:32:24.023682 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:32:24.023693 | orchestrator | ok: [testbed-node-4] =>  2026-04-05 00:32:24.023705 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:32:24.023716 | orchestrator | ok: [testbed-node-5] =>  
2026-04-05 00:32:24.023753 | orchestrator |  docker_version: 5:27.5.1 2026-04-05 00:32:24.023765 | orchestrator | 2026-04-05 00:32:24.023777 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-05 00:32:24.023789 | orchestrator | Sunday 05 April 2026 00:32:18 +0000 (0:00:00.318) 0:05:18.101 ********** 2026-04-05 00:32:24.023813 | orchestrator | ok: [testbed-manager] =>  2026-04-05 00:32:24.023825 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:32:24.023837 | orchestrator | ok: [testbed-node-0] =>  2026-04-05 00:32:24.023848 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:32:24.023860 | orchestrator | ok: [testbed-node-1] =>  2026-04-05 00:32:24.023871 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:32:24.023884 | orchestrator | ok: [testbed-node-2] =>  2026-04-05 00:32:24.023896 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:32:24.023908 | orchestrator | ok: [testbed-node-3] =>  2026-04-05 00:32:24.023920 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:32:24.023932 | orchestrator | ok: [testbed-node-4] =>  2026-04-05 00:32:24.023943 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:32:24.023955 | orchestrator | ok: [testbed-node-5] =>  2026-04-05 00:32:24.023968 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-05 00:32:24.023980 | orchestrator | 2026-04-05 00:32:24.023992 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-05 00:32:24.024004 | orchestrator | Sunday 05 April 2026 00:32:18 +0000 (0:00:00.298) 0:05:18.399 ********** 2026-04-05 00:32:24.024015 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:24.024028 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:32:24.024039 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:32:24.024051 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:32:24.024063 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 00:32:24.024076 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:32:24.024088 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:24.024100 | orchestrator | 2026-04-05 00:32:24.024113 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-05 00:32:24.024127 | orchestrator | Sunday 05 April 2026 00:32:19 +0000 (0:00:00.296) 0:05:18.696 ********** 2026-04-05 00:32:24.024138 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:24.024150 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:32:24.024163 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:32:24.024176 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:32:24.024188 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:32:24.024201 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:32:24.024213 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:32:24.024225 | orchestrator | 2026-04-05 00:32:24.024238 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-05 00:32:24.024250 | orchestrator | Sunday 05 April 2026 00:32:19 +0000 (0:00:00.283) 0:05:18.979 ********** 2026-04-05 00:32:24.024275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:32:24.024291 | orchestrator | 2026-04-05 00:32:24.024305 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-05 00:32:24.024318 | orchestrator | Sunday 05 April 2026 00:32:19 +0000 (0:00:00.444) 0:05:19.424 ********** 2026-04-05 00:32:24.024331 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:24.024346 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:24.024359 | orchestrator | ok: [testbed-node-3] 2026-04-05 
00:32:24.024372 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:24.024384 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:24.024397 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:24.024410 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:24.024423 | orchestrator | 2026-04-05 00:32:24.024436 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-05 00:32:24.024449 | orchestrator | Sunday 05 April 2026 00:32:20 +0000 (0:00:00.762) 0:05:20.186 ********** 2026-04-05 00:32:24.024462 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:32:24.024474 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:32:24.024487 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:32:24.024499 | orchestrator | ok: [testbed-manager] 2026-04-05 00:32:24.024520 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:32:24.024533 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:32:24.024546 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:32:24.024559 | orchestrator | 2026-04-05 00:32:24.024571 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-05 00:32:24.024585 | orchestrator | Sunday 05 April 2026 00:32:23 +0000 (0:00:02.977) 0:05:23.164 ********** 2026-04-05 00:32:24.024598 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-05 00:32:24.024612 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-05 00:32:24.024626 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-05 00:32:24.024639 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:32:24.024653 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-05 00:32:24.024667 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-05 00:32:24.024679 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-05 00:32:24.024692 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:32:24.024706 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-05 00:32:24.024808 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-05 00:32:24.024825 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-05 00:32:24.024837 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-05 00:32:24.024850 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-05 00:32:24.024862 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-05 00:32:24.024875 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:32:24.024888 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-05 00:32:24.024917 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-05 00:33:25.743518 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-05 00:33:25.743628 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:33:25.743643 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-05 00:33:25.743654 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-05 00:33:25.743664 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-05 00:33:25.743674 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:33:25.743684 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:33:25.743695 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-05 00:33:25.743704 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-05 00:33:25.743714 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-05 00:33:25.743724 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:33:25.743734 | orchestrator | 2026-04-05 00:33:25.743745 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-05 00:33:25.743756 | orchestrator | Sunday 05 
April 2026 00:32:24 +0000 (0:00:00.692) 0:05:23.857 ********** 2026-04-05 00:33:25.743766 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.743776 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.743786 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.743796 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.743805 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.743815 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.743825 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.743834 | orchestrator | 2026-04-05 00:33:25.743844 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-05 00:33:25.743854 | orchestrator | Sunday 05 April 2026 00:32:30 +0000 (0:00:06.538) 0:05:30.395 ********** 2026-04-05 00:33:25.743886 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.743897 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.743907 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.743916 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.743926 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.743958 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.743969 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.743979 | orchestrator | 2026-04-05 00:33:25.743989 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-05 00:33:25.743998 | orchestrator | Sunday 05 April 2026 00:32:31 +0000 (0:00:01.055) 0:05:31.451 ********** 2026-04-05 00:33:25.744008 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.744018 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744028 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744037 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744047 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744059 | orchestrator | 
changed: [testbed-node-5] 2026-04-05 00:33:25.744071 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744082 | orchestrator | 2026-04-05 00:33:25.744093 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-05 00:33:25.744104 | orchestrator | Sunday 05 April 2026 00:32:39 +0000 (0:00:08.086) 0:05:39.537 ********** 2026-04-05 00:33:25.744116 | orchestrator | changed: [testbed-manager] 2026-04-05 00:33:25.744141 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744151 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744161 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744171 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744180 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744190 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.744199 | orchestrator | 2026-04-05 00:33:25.744209 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-05 00:33:25.744219 | orchestrator | Sunday 05 April 2026 00:32:43 +0000 (0:00:03.634) 0:05:43.172 ********** 2026-04-05 00:33:25.744228 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.744238 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744248 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744257 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744267 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744276 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744286 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.744295 | orchestrator | 2026-04-05 00:33:25.744305 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-05 00:33:25.744315 | orchestrator | Sunday 05 April 2026 00:32:44 +0000 (0:00:01.384) 0:05:44.556 ********** 2026-04-05 00:33:25.744324 | orchestrator | ok: [testbed-manager] 
2026-04-05 00:33:25.744334 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744344 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744353 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744363 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744372 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744382 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.744391 | orchestrator | 2026-04-05 00:33:25.744401 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-05 00:33:25.744411 | orchestrator | Sunday 05 April 2026 00:32:46 +0000 (0:00:01.337) 0:05:45.894 ********** 2026-04-05 00:33:25.744421 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:33:25.744431 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:33:25.744440 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:33:25.744450 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:33:25.744460 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:33:25.744469 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:33:25.744479 | orchestrator | changed: [testbed-manager] 2026-04-05 00:33:25.744489 | orchestrator | 2026-04-05 00:33:25.744499 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-05 00:33:25.744508 | orchestrator | Sunday 05 April 2026 00:32:46 +0000 (0:00:00.623) 0:05:46.518 ********** 2026-04-05 00:33:25.744518 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.744527 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744537 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744554 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744564 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744573 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744583 | orchestrator | changed: [testbed-node-5] 2026-04-05 
00:33:25.744592 | orchestrator | 2026-04-05 00:33:25.744602 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-05 00:33:25.744627 | orchestrator | Sunday 05 April 2026 00:32:56 +0000 (0:00:09.481) 0:05:55.999 ********** 2026-04-05 00:33:25.744638 | orchestrator | changed: [testbed-manager] 2026-04-05 00:33:25.744648 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744657 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744667 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744676 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744686 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744695 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.744705 | orchestrator | 2026-04-05 00:33:25.744714 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-05 00:33:25.744724 | orchestrator | Sunday 05 April 2026 00:32:57 +0000 (0:00:01.221) 0:05:57.221 ********** 2026-04-05 00:33:25.744734 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.744743 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744753 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744762 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744772 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744781 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744791 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.744801 | orchestrator | 2026-04-05 00:33:25.744810 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-05 00:33:25.744820 | orchestrator | Sunday 05 April 2026 00:33:07 +0000 (0:00:09.453) 0:06:06.674 ********** 2026-04-05 00:33:25.744829 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.744839 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.744848 | 
orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.744858 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.744885 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.744895 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.744904 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.744914 | orchestrator | 2026-04-05 00:33:25.744924 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-05 00:33:25.744933 | orchestrator | Sunday 05 April 2026 00:33:18 +0000 (0:00:11.276) 0:06:17.950 ********** 2026-04-05 00:33:25.744943 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-05 00:33:25.744953 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-05 00:33:25.744963 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-05 00:33:25.744972 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-05 00:33:25.744982 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-05 00:33:25.744992 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-05 00:33:25.745001 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-05 00:33:25.745011 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-05 00:33:25.745020 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-05 00:33:25.745030 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-05 00:33:25.745039 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-05 00:33:25.745049 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-05 00:33:25.745059 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-05 00:33:25.745068 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-05 00:33:25.745078 | orchestrator | 2026-04-05 00:33:25.745088 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-04-05 00:33:25.745097 | orchestrator | Sunday 05 April 2026 00:33:19 +0000 (0:00:01.254) 0:06:19.205 ********** 2026-04-05 00:33:25.745114 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:33:25.745124 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:33:25.745134 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:33:25.745144 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:33:25.745153 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:33:25.745163 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:33:25.745172 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:33:25.745182 | orchestrator | 2026-04-05 00:33:25.745192 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-05 00:33:25.745201 | orchestrator | Sunday 05 April 2026 00:33:20 +0000 (0:00:00.786) 0:06:19.991 ********** 2026-04-05 00:33:25.745211 | orchestrator | ok: [testbed-manager] 2026-04-05 00:33:25.745221 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:33:25.745230 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:33:25.745240 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:33:25.745249 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:33:25.745259 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:33:25.745268 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:33:25.745278 | orchestrator | 2026-04-05 00:33:25.745288 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-05 00:33:25.745298 | orchestrator | Sunday 05 April 2026 00:33:24 +0000 (0:00:04.493) 0:06:24.485 ********** 2026-04-05 00:33:25.745308 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:33:25.745318 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:33:25.745327 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:33:25.745337 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 00:33:25.745346 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:33:25.745356 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:33:25.745366 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:33:25.745375 | orchestrator | 2026-04-05 00:33:25.745385 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-05 00:33:25.745396 | orchestrator | Sunday 05 April 2026 00:33:25 +0000 (0:00:00.553) 0:06:25.039 ********** 2026-04-05 00:33:25.745406 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-05 00:33:25.745415 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-05 00:33:25.745425 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:33:25.745434 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-05 00:33:25.745444 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-05 00:33:25.745454 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:33:25.745463 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-05 00:33:25.745473 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-05 00:33:25.745482 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:33:25.745498 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-05 00:33:45.691449 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-05 00:33:45.691563 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:33:45.691578 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-05 00:33:45.691588 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-05 00:33:45.691648 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:33:45.691660 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-05 00:33:45.691670 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-04-05 00:33:45.691680 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:33:45.691690 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-05 00:33:45.691700 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-05 00:33:45.691710 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:33:45.691720 | orchestrator | 2026-04-05 00:33:45.691732 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-05 00:33:45.691764 | orchestrator | Sunday 05 April 2026 00:33:26 +0000 (0:00:00.576) 0:06:25.616 ********** 2026-04-05 00:33:45.691774 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:33:45.691784 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:33:45.691794 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:33:45.691803 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:33:45.691813 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:33:45.691823 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:33:45.691832 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:33:45.691842 | orchestrator | 2026-04-05 00:33:45.691852 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-05 00:33:45.691862 | orchestrator | Sunday 05 April 2026 00:33:26 +0000 (0:00:00.538) 0:06:26.155 ********** 2026-04-05 00:33:45.691872 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:33:45.691882 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:33:45.691891 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:33:45.691948 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:33:45.691958 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:33:45.691969 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:33:45.691981 | orchestrator | skipping: [testbed-node-5] 
2026-04-05 00:33:45.691992 | orchestrator |
2026-04-05 00:33:45.692003 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-05 00:33:45.692014 | orchestrator | Sunday 05 April 2026 00:33:27 +0000 (0:00:00.752) 0:06:26.907 **********
2026-04-05 00:33:45.692025 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:33:45.692036 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:33:45.692047 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:33:45.692058 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:33:45.692069 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:33:45.692080 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:33:45.692091 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:33:45.692101 | orchestrator |
2026-04-05 00:33:45.692113 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-05 00:33:45.692129 | orchestrator | Sunday 05 April 2026 00:33:27 +0000 (0:00:00.585) 0:06:27.493 **********
2026-04-05 00:33:45.692141 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.692152 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:45.692163 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:45.692173 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:45.692184 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:45.692194 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:45.692205 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:45.692216 | orchestrator |
2026-04-05 00:33:45.692227 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-05 00:33:45.692239 | orchestrator | Sunday 05 April 2026 00:33:29 +0000 (0:00:01.814) 0:06:29.308 **********
2026-04-05 00:33:45.692251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:33:45.692263 | orchestrator |
2026-04-05 00:33:45.692273 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-05 00:33:45.692282 | orchestrator | Sunday 05 April 2026 00:33:30 +0000 (0:00:00.932) 0:06:30.240 **********
2026-04-05 00:33:45.692292 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.692301 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:45.692311 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:45.692320 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:45.692330 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:45.692340 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:45.692349 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:45.692359 | orchestrator |
2026-04-05 00:33:45.692368 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-05 00:33:45.692386 | orchestrator | Sunday 05 April 2026 00:33:31 +0000 (0:00:01.036) 0:06:31.277 **********
2026-04-05 00:33:45.692395 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.692405 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:45.692415 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:45.692424 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:45.692433 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:45.692443 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:45.692452 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:45.692461 | orchestrator |
2026-04-05 00:33:45.692471 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-05 00:33:45.692481 | orchestrator | Sunday 05 April 2026 00:33:32 +0000 (0:00:00.841) 0:06:32.118 **********
2026-04-05 00:33:45.692490 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.692499 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:45.692509 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:45.692518 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:45.692527 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:45.692537 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:45.692546 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:45.692556 | orchestrator |
2026-04-05 00:33:45.692565 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-05 00:33:45.692592 | orchestrator | Sunday 05 April 2026 00:33:34 +0000 (0:00:01.474) 0:06:33.592 **********
2026-04-05 00:33:45.692602 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:33:45.692612 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:45.692621 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:45.692631 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:45.692640 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:45.692650 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:45.692659 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:45.692668 | orchestrator |
2026-04-05 00:33:45.692678 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-05 00:33:45.692688 | orchestrator | Sunday 05 April 2026 00:33:35 +0000 (0:00:01.448) 0:06:35.041 **********
2026-04-05 00:33:45.692698 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.692707 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:45.692717 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:45.692726 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:45.692736 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:45.692745 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:45.692754 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:45.692764 | orchestrator |
2026-04-05 00:33:45.692773 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-05 00:33:45.692783 | orchestrator | Sunday 05 April 2026 00:33:36 +0000 (0:00:01.347) 0:06:36.389 **********
2026-04-05 00:33:45.692793 | orchestrator | changed: [testbed-manager]
2026-04-05 00:33:45.692802 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:33:45.692812 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:33:45.692821 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:33:45.692831 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:33:45.692840 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:33:45.692850 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:33:45.692859 | orchestrator |
2026-04-05 00:33:45.692869 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-05 00:33:45.692878 | orchestrator | Sunday 05 April 2026 00:33:38 +0000 (0:00:01.657) 0:06:38.046 **********
2026-04-05 00:33:45.692888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:33:45.692898 | orchestrator |
2026-04-05 00:33:45.692933 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-05 00:33:45.692954 | orchestrator | Sunday 05 April 2026 00:33:39 +0000 (0:00:00.900) 0:06:38.946 **********
2026-04-05 00:33:45.692964 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:45.692973 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.692983 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:45.692993 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:45.693003 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:45.693012 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:45.693022 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:45.693032 | orchestrator |
2026-04-05 00:33:45.693041 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-05 00:33:45.693052 | orchestrator | Sunday 05 April 2026 00:33:40 +0000 (0:00:01.364) 0:06:40.310 **********
2026-04-05 00:33:45.693062 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.693072 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:45.693081 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:45.693090 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:45.693100 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:45.693109 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:45.693119 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:45.693139 | orchestrator |
2026-04-05 00:33:45.693150 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-05 00:33:45.693159 | orchestrator | Sunday 05 April 2026 00:33:42 +0000 (0:00:01.380) 0:06:41.691 **********
2026-04-05 00:33:45.693169 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.693178 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:45.693188 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:45.693197 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:45.693207 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:45.693216 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:45.693226 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:45.693235 | orchestrator |
2026-04-05 00:33:45.693245 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-05 00:33:45.693254 | orchestrator | Sunday 05 April 2026 00:33:43 +0000 (0:00:01.133) 0:06:42.824 **********
2026-04-05 00:33:45.693264 | orchestrator | ok: [testbed-manager]
2026-04-05 00:33:45.693273 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:33:45.693283 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:33:45.693292 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:33:45.693302 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:33:45.693311 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:33:45.693320 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:33:45.693330 | orchestrator |
2026-04-05 00:33:45.693339 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-05 00:33:45.693349 | orchestrator | Sunday 05 April 2026 00:33:44 +0000 (0:00:01.127) 0:06:43.952 **********
2026-04-05 00:33:45.693359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:33:45.693369 | orchestrator |
2026-04-05 00:33:45.693378 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:33:45.693388 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.942) 0:06:44.895 **********
2026-04-05 00:33:45.693397 | orchestrator |
2026-04-05 00:33:45.693407 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:33:45.693417 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.041) 0:06:44.937 **********
2026-04-05 00:33:45.693426 | orchestrator |
2026-04-05 00:33:45.693435 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:33:45.693445 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.252) 0:06:45.190 **********
2026-04-05 00:33:45.693455 | orchestrator |
2026-04-05 00:33:45.693464 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:33:45.693480 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.074) 0:06:45.264 **********
2026-04-05 00:34:12.252743 | orchestrator |
2026-04-05 00:34:12.252883 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:34:12.252902 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.059) 0:06:45.324 **********
2026-04-05 00:34:12.252914 | orchestrator |
2026-04-05 00:34:12.252925 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:34:12.252937 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.050) 0:06:45.374 **********
2026-04-05 00:34:12.252947 | orchestrator |
2026-04-05 00:34:12.253006 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-05 00:34:12.253021 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.041) 0:06:45.415 **********
2026-04-05 00:34:12.253032 | orchestrator |
2026-04-05 00:34:12.253043 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-05 00:34:12.253054 | orchestrator | Sunday 05 April 2026 00:33:45 +0000 (0:00:00.042) 0:06:45.458 **********
2026-04-05 00:34:12.253065 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:12.253077 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:12.253088 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:12.253099 | orchestrator |
2026-04-05 00:34:12.253110 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-05 00:34:12.253120 | orchestrator | Sunday 05 April 2026 00:33:47 +0000 (0:00:01.277) 0:06:46.735 **********
2026-04-05 00:34:12.253131 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:12.253143 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:12.253153 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:12.253164 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:12.253175 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:12.253187 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:12.253198 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:12.253209 | orchestrator |
2026-04-05 00:34:12.253220 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-05 00:34:12.253230 | orchestrator | Sunday 05 April 2026 00:33:48 +0000 (0:00:01.348) 0:06:48.084 **********
2026-04-05 00:34:12.253241 | orchestrator | changed: [testbed-manager]
2026-04-05 00:34:12.253252 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:12.253263 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:12.253275 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:12.253288 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:12.253301 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:12.253313 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:12.253325 | orchestrator |
2026-04-05 00:34:12.253339 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-05 00:34:12.253351 | orchestrator | Sunday 05 April 2026 00:33:49 +0000 (0:00:01.241) 0:06:49.325 **********
2026-04-05 00:34:12.253363 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:12.253376 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:12.253388 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:12.253400 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:12.253419 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:12.253438 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:12.253456 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:12.253474 | orchestrator |
2026-04-05 00:34:12.253511 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-05 00:34:12.253532 | orchestrator | Sunday 05 April 2026 00:33:52 +0000 (0:00:02.589) 0:06:51.914 **********
2026-04-05 00:34:12.253551 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:12.253570 | orchestrator |
2026-04-05 00:34:12.253584 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-05 00:34:12.253596 | orchestrator | Sunday 05 April 2026 00:33:52 +0000 (0:00:00.110) 0:06:52.025 **********
2026-04-05 00:34:12.253609 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:12.253621 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:12.253635 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:12.253647 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:12.253668 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:12.253679 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:12.253690 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:12.253701 | orchestrator |
2026-04-05 00:34:12.253712 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-05 00:34:12.253724 | orchestrator | Sunday 05 April 2026 00:33:53 +0000 (0:00:01.263) 0:06:53.288 **********
2026-04-05 00:34:12.253734 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:12.253745 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:12.253756 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:12.253767 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:12.253777 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:12.253788 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:12.253799 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:12.253809 | orchestrator |
2026-04-05 00:34:12.253820 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-05 00:34:12.253831 | orchestrator | Sunday 05 April 2026 00:33:54 +0000 (0:00:00.572) 0:06:53.861 **********
2026-04-05 00:34:12.253843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:12.253857 | orchestrator |
2026-04-05 00:34:12.253868 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-05 00:34:12.253878 | orchestrator | Sunday 05 April 2026 00:33:55 +0000 (0:00:00.919) 0:06:54.781 **********
2026-04-05 00:34:12.253889 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:12.253900 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:12.253911 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:12.253922 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:12.253933 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:12.253944 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:12.253954 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:12.253999 | orchestrator |
2026-04-05 00:34:12.254011 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-05 00:34:12.254089 | orchestrator | Sunday 05 April 2026 00:33:56 +0000 (0:00:01.005) 0:06:55.787 **********
2026-04-05 00:34:12.254102 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-05 00:34:12.254132 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-05 00:34:12.254145 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-05 00:34:12.254156 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-05 00:34:12.254167 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-05 00:34:12.254177 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-05 00:34:12.254188 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-05 00:34:12.254199 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-05 00:34:12.254210 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-05 00:34:12.254221 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-05 00:34:12.254232 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-05 00:34:12.254243 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-05 00:34:12.254253 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-05 00:34:12.254264 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-05 00:34:12.254275 | orchestrator |
2026-04-05 00:34:12.254286 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-05 00:34:12.254298 | orchestrator | Sunday 05 April 2026 00:33:58 +0000 (0:00:02.591) 0:06:58.378 **********
2026-04-05 00:34:12.254309 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:12.254320 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:12.254330 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:12.254351 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:12.254362 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:12.254373 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:12.254383 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:12.254394 | orchestrator |
2026-04-05 00:34:12.254406 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-05 00:34:12.254417 | orchestrator | Sunday 05 April 2026 00:33:59 +0000 (0:00:00.537) 0:06:58.915 **********
2026-04-05 00:34:12.254430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:12.254443 | orchestrator |
2026-04-05 00:34:12.254454 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-05 00:34:12.254465 | orchestrator | Sunday 05 April 2026 00:34:00 +0000 (0:00:00.990) 0:06:59.905 **********
2026-04-05 00:34:12.254483 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:12.254504 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:12.254523 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:12.254545 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:12.254566 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:12.254587 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:12.254609 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:12.254621 | orchestrator |
2026-04-05 00:34:12.254632 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-05 00:34:12.254643 | orchestrator | Sunday 05 April 2026 00:34:01 +0000 (0:00:00.849) 0:07:00.755 **********
2026-04-05 00:34:12.254654 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:12.254664 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:12.254675 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:12.254686 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:12.254696 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:12.254707 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:12.254718 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:12.254728 | orchestrator |
2026-04-05 00:34:12.254739 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-05 00:34:12.254750 | orchestrator | Sunday 05 April 2026 00:34:01 +0000 (0:00:00.824) 0:07:01.579 **********
2026-04-05 00:34:12.254761 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:12.254772 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:12.254783 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:12.254794 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:12.254805 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:12.254815 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:12.254826 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:12.254837 | orchestrator |
2026-04-05 00:34:12.254848 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-05 00:34:12.254859 | orchestrator | Sunday 05 April 2026 00:34:02 +0000 (0:00:00.497) 0:07:02.077 **********
2026-04-05 00:34:12.254869 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:12.254880 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:12.254891 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:12.254902 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:12.254913 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:12.254923 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:12.254934 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:12.254945 | orchestrator |
2026-04-05 00:34:12.254955 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-05 00:34:12.255022 | orchestrator | Sunday 05 April 2026 00:34:03 +0000 (0:00:01.446) 0:07:03.523 **********
2026-04-05 00:34:12.255034 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:12.255045 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:12.255057 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:12.255068 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:12.255087 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:12.255097 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:12.255108 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:12.255119 | orchestrator |
2026-04-05 00:34:12.255135 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-05 00:34:12.255155 | orchestrator | Sunday 05 April 2026 00:34:04 +0000 (0:00:00.704) 0:07:04.228 **********
2026-04-05 00:34:12.255175 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:12.255194 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:12.255214 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:12.255235 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:12.255255 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:12.255275 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:12.255302 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:45.898278 | orchestrator |
2026-04-05 00:34:45.898424 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-05 00:34:45.898450 | orchestrator | Sunday 05 April 2026 00:34:12 +0000 (0:00:07.673) 0:07:11.901 **********
2026-04-05 00:34:45.898471 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.898491 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:45.898511 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:45.898529 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:45.898548 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:45.898566 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:45.898584 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:45.898602 | orchestrator |
2026-04-05 00:34:45.898620 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-05 00:34:45.898640 | orchestrator | Sunday 05 April 2026 00:34:13 +0000 (0:00:01.339) 0:07:13.240 **********
2026-04-05 00:34:45.898661 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.898681 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:45.898701 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:45.898719 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:45.898739 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:45.898758 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:45.898776 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:45.898795 | orchestrator |
2026-04-05 00:34:45.898813 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-05 00:34:45.898832 | orchestrator | Sunday 05 April 2026 00:34:15 +0000 (0:00:01.859) 0:07:15.099 **********
2026-04-05 00:34:45.898850 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.898871 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:34:45.898890 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:34:45.898910 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:34:45.898928 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:34:45.898947 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:34:45.898966 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:34:45.898986 | orchestrator |
2026-04-05 00:34:45.899006 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-05 00:34:45.899059 | orchestrator | Sunday 05 April 2026 00:34:17 +0000 (0:00:01.887) 0:07:16.987 **********
2026-04-05 00:34:45.899169 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.899192 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.899229 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.899249 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.899268 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.899287 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.899306 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.899323 | orchestrator |
2026-04-05 00:34:45.899343 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-05 00:34:45.899362 | orchestrator | Sunday 05 April 2026 00:34:18 +0000 (0:00:00.858) 0:07:17.845 **********
2026-04-05 00:34:45.899382 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:45.899400 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:45.899454 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:45.899474 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:45.899493 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:45.899509 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:45.899527 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:45.899545 | orchestrator |
2026-04-05 00:34:45.899563 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-05 00:34:45.899581 | orchestrator | Sunday 05 April 2026 00:34:19 +0000 (0:00:00.952) 0:07:18.798 **********
2026-04-05 00:34:45.899598 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:45.899616 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:45.899634 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:45.899652 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:45.899671 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:45.899689 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:45.899707 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:45.899725 | orchestrator |
2026-04-05 00:34:45.899745 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-05 00:34:45.899765 | orchestrator | Sunday 05 April 2026 00:34:20 +0000 (0:00:00.797) 0:07:19.595 **********
2026-04-05 00:34:45.899783 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.899801 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.899820 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.899838 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.899856 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.899873 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.899891 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.899908 | orchestrator |
2026-04-05 00:34:45.899926 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-05 00:34:45.899944 | orchestrator | Sunday 05 April 2026 00:34:20 +0000 (0:00:00.540) 0:07:20.136 **********
2026-04-05 00:34:45.899963 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.899981 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.900000 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.900082 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.900107 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.900128 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.900145 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.900163 | orchestrator |
2026-04-05 00:34:45.900182 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-05 00:34:45.900200 | orchestrator | Sunday 05 April 2026 00:34:21 +0000 (0:00:00.574) 0:07:20.711 **********
2026-04-05 00:34:45.900219 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.900239 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.900258 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.900277 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.900296 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.900316 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.900335 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.900354 | orchestrator |
2026-04-05 00:34:45.900372 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-05 00:34:45.900392 | orchestrator | Sunday 05 April 2026 00:34:21 +0000 (0:00:00.556) 0:07:21.267 **********
2026-04-05 00:34:45.900413 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.900433 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.900454 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.900473 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.900492 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.900512 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.900532 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.900552 | orchestrator |
2026-04-05 00:34:45.900625 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-05 00:34:45.900666 | orchestrator | Sunday 05 April 2026 00:34:27 +0000 (0:00:05.732) 0:07:26.999 **********
2026-04-05 00:34:45.900686 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:34:45.900725 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:34:45.900744 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:34:45.900761 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:34:45.900779 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:34:45.900797 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:34:45.900816 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:34:45.900834 | orchestrator |
2026-04-05 00:34:45.900852 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-05 00:34:45.900871 | orchestrator | Sunday 05 April 2026 00:34:28 +0000 (0:00:00.807) 0:07:27.806 **********
2026-04-05 00:34:45.900892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:45.900914 | orchestrator |
2026-04-05 00:34:45.900932 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-05 00:34:45.900951 | orchestrator | Sunday 05 April 2026 00:34:29 +0000 (0:00:00.939) 0:07:28.746 **********
2026-04-05 00:34:45.900994 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.901012 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.901111 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.901129 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.901147 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.901165 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.901183 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.901201 | orchestrator |
2026-04-05 00:34:45.901219 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-05 00:34:45.901237 | orchestrator | Sunday 05 April 2026 00:34:31 +0000 (0:00:01.895) 0:07:30.641 **********
2026-04-05 00:34:45.901257 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.901274 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.901292 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.901312 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.901330 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.901348 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.901366 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.901384 | orchestrator |
2026-04-05 00:34:45.901402 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-05 00:34:45.901420 | orchestrator | Sunday 05 April 2026 00:34:32 +0000 (0:00:01.383) 0:07:32.025 **********
2026-04-05 00:34:45.901438 | orchestrator | ok: [testbed-manager]
2026-04-05 00:34:45.901456 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:34:45.901474 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:34:45.901493 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:34:45.901511 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:34:45.901529 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:34:45.901546 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:34:45.901564 | orchestrator |
2026-04-05 00:34:45.901592 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-05 00:34:45.901611 | orchestrator | Sunday 05 April 2026 00:34:33 +0000 (0:00:00.835) 0:07:32.861 **********
2026-04-05 00:34:45.901630 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:34:45.901650 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:34:45.901669 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:34:45.901687 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:34:45.901705 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:34:45.901737 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:34:45.901755 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 00:34:45.901773 | orchestrator |
2026-04-05 00:34:45.901791 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-05 00:34:45.901811 | orchestrator | Sunday 05 April 2026 00:34:35 +0000 (0:00:01.745) 0:07:34.606 **********
2026-04-05 00:34:45.901831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:34:45.901850 | orchestrator |
2026-04-05 00:34:45.901867 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-05 00:34:45.901887 |
orchestrator | Sunday 05 April 2026 00:34:36 +0000 (0:00:00.987) 0:07:35.594 ********** 2026-04-05 00:34:45.901906 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:34:45.901922 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:34:45.901940 | orchestrator | changed: [testbed-manager] 2026-04-05 00:34:45.901958 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:34:45.901976 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:34:45.901993 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:34:45.902011 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:34:45.902112 | orchestrator | 2026-04-05 00:34:45.902152 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-05 00:35:16.126498 | orchestrator | Sunday 05 April 2026 00:34:45 +0000 (0:00:09.882) 0:07:45.476 ********** 2026-04-05 00:35:16.126614 | orchestrator | ok: [testbed-manager] 2026-04-05 00:35:16.126632 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:35:16.126644 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:35:16.126655 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:35:16.126666 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:35:16.126677 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:35:16.126688 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:35:16.126699 | orchestrator | 2026-04-05 00:35:16.126711 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-05 00:35:16.126723 | orchestrator | Sunday 05 April 2026 00:34:47 +0000 (0:00:01.878) 0:07:47.354 ********** 2026-04-05 00:35:16.126734 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:35:16.126745 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:35:16.126756 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:35:16.126766 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:35:16.126778 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:35:16.126788 | orchestrator | ok: [testbed-node-5] 
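For context on the chrony tasks above: the role copies a rendered `chrony.conf.j2` onto each host, and the handler that follows restarts the service. A rendered result typically looks like the fragment below; the pool name and tuning values are illustrative assumptions, not taken from this job's configuration.

```
# Illustrative chrony.conf as rendered from chrony.conf.j2
# (server and option values are assumed examples, not from this testbed)
pool pool.ntp.org iburst
driftfile /var/lib/chrony/chrony.drift
makestep 1.0 3
rtcsync
```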
2026-04-05 00:35:16.126799 | orchestrator |
2026-04-05 00:35:16.126810 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-05 00:35:16.126821 | orchestrator | Sunday 05 April 2026 00:34:49 +0000 (0:00:01.478) 0:07:48.833 **********
2026-04-05 00:35:16.126832 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.126844 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.126855 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.126866 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.126877 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.126888 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.126898 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.126909 | orchestrator |
2026-04-05 00:35:16.126920 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-05 00:35:16.126931 | orchestrator |
2026-04-05 00:35:16.126942 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-05 00:35:16.126953 | orchestrator | Sunday 05 April 2026 00:34:50 +0000 (0:00:00.526) 0:07:50.125 **********
2026-04-05 00:35:16.126964 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:16.126998 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:16.127010 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:16.127022 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:16.127035 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:16.127047 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:16.127082 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:16.127095 | orchestrator |
2026-04-05 00:35:16.127108 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-05 00:35:16.127121 | orchestrator |
2026-04-05 00:35:16.127134 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-05 00:35:16.127146 | orchestrator | Sunday 05 April 2026 00:34:51 +0000 (0:00:00.526) 0:07:50.652 **********
2026-04-05 00:35:16.127158 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.127171 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.127189 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.127209 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.127245 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.127266 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.127287 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.127306 | orchestrator |
2026-04-05 00:35:16.127318 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-05 00:35:16.127329 | orchestrator | Sunday 05 April 2026 00:34:52 +0000 (0:00:01.352) 0:07:52.005 **********
2026-04-05 00:35:16.127339 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:16.127350 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:16.127361 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:16.127372 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:16.127382 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:16.127393 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:16.127404 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:16.127414 | orchestrator |
2026-04-05 00:35:16.127425 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-05 00:35:16.127436 | orchestrator | Sunday 05 April 2026 00:34:54 +0000 (0:00:00.560) 0:07:53.741 **********
2026-04-05 00:35:16.127447 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:35:16.127458 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:35:16.127469 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:35:16.127479 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:35:16.127490 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:35:16.127501 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:35:16.127512 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:35:16.127523 | orchestrator |
2026-04-05 00:35:16.127533 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-05 00:35:16.127545 | orchestrator | Sunday 05 April 2026 00:34:54 +0000 (0:00:00.560) 0:07:54.301 **********
2026-04-05 00:35:16.127556 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:16.127569 | orchestrator |
2026-04-05 00:35:16.127580 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-05 00:35:16.127590 | orchestrator | Sunday 05 April 2026 00:34:55 +0000 (0:00:00.879) 0:07:55.181 **********
2026-04-05 00:35:16.127604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:16.127618 | orchestrator |
2026-04-05 00:35:16.127629 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-05 00:35:16.127640 | orchestrator | Sunday 05 April 2026 00:34:56 +0000 (0:00:01.048) 0:07:56.230 **********
2026-04-05 00:35:16.127650 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.127661 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.127672 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.127683 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.127703 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.127714 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.127725 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.127736 | orchestrator |
2026-04-05 00:35:16.127764 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-05 00:35:16.127776 | orchestrator | Sunday 05 April 2026 00:35:05 +0000 (0:00:08.840) 0:08:05.070 **********
2026-04-05 00:35:16.127787 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.127798 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.127809 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.127819 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.127830 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.127841 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.127852 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.127862 | orchestrator |
2026-04-05 00:35:16.127873 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-05 00:35:16.127884 | orchestrator | Sunday 05 April 2026 00:35:06 +0000 (0:00:00.765) 0:08:05.836 **********
2026-04-05 00:35:16.127895 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.127906 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.127917 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.127928 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.127938 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.127949 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.127959 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.127970 | orchestrator |
2026-04-05 00:35:16.127981 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-05 00:35:16.127992 | orchestrator | Sunday 05 April 2026 00:35:07 +0000 (0:00:01.274) 0:08:07.111 **********
2026-04-05 00:35:16.128003 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.128014 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.128024 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.128035 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.128046 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.128056 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.128089 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.128100 | orchestrator |
2026-04-05 00:35:16.128111 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-05 00:35:16.128123 | orchestrator | Sunday 05 April 2026 00:35:09 +0000 (0:00:01.799) 0:08:08.911 **********
2026-04-05 00:35:16.128134 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.128144 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.128155 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.128165 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.128176 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.128187 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.128198 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.128208 | orchestrator |
2026-04-05 00:35:16.128219 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-05 00:35:16.128230 | orchestrator | Sunday 05 April 2026 00:35:10 +0000 (0:00:01.184) 0:08:10.095 **********
2026-04-05 00:35:16.128241 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.128252 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.128263 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.128274 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.128284 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.128301 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.128312 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.128323 | orchestrator |
2026-04-05 00:35:16.128333 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-05 00:35:16.128344 | orchestrator |
2026-04-05 00:35:16.128355 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-05 00:35:16.128366 | orchestrator | Sunday 05 April 2026 00:35:11 +0000 (0:00:01.033) 0:08:11.129 **********
2026-04-05 00:35:16.128385 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:16.128396 | orchestrator |
2026-04-05 00:35:16.128407 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-05 00:35:16.128417 | orchestrator | Sunday 05 April 2026 00:35:12 +0000 (0:00:00.901) 0:08:12.030 **********
2026-04-05 00:35:16.128428 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:16.128439 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:16.128450 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:16.128461 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:16.128472 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:16.128483 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:16.128493 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:16.128504 | orchestrator |
2026-04-05 00:35:16.128515 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-05 00:35:16.128526 | orchestrator | Sunday 05 April 2026 00:35:13 +0000 (0:00:00.774) 0:08:12.805 **********
2026-04-05 00:35:16.128537 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:16.128548 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:16.128558 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:16.128569 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:16.128580 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:16.128591 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:16.128601 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:16.128612 | orchestrator |
2026-04-05 00:35:16.128623 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-05 00:35:16.128634 | orchestrator | Sunday 05 April 2026 00:35:14 +0000 (0:00:01.151) 0:08:13.956 **********
2026-04-05 00:35:16.128645 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:16.128656 | orchestrator |
2026-04-05 00:35:16.128667 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-05 00:35:16.128677 | orchestrator | Sunday 05 April 2026 00:35:15 +0000 (0:00:00.797) 0:08:14.753 **********
2026-04-05 00:35:16.128688 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:16.128699 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:16.128710 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:16.128721 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:16.128731 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:16.128742 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:16.128753 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:16.128763 | orchestrator |
2026-04-05 00:35:16.128781 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-05 00:35:17.743510 | orchestrator | Sunday 05 April 2026 00:35:16 +0000 (0:00:00.949) 0:08:15.703 **********
2026-04-05 00:35:17.743593 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:17.743605 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:17.743612 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:17.743619 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:17.743626 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:17.743633 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:17.743640 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:17.743647 | orchestrator |
2026-04-05 00:35:17.743654 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:35:17.743663 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-05 00:35:17.743672 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-05 00:35:17.743679 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-05 00:35:17.743706 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-05 00:35:17.743714 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-05 00:35:17.743721 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-05 00:35:17.743727 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-05 00:35:17.743734 | orchestrator |
2026-04-05 00:35:17.743741 | orchestrator |
2026-04-05 00:35:17.743748 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:35:17.743755 | orchestrator | Sunday 05 April 2026 00:35:17 +0000 (0:00:01.280) 0:08:16.983 **********
2026-04-05 00:35:17.743761 | orchestrator | ===============================================================================
2026-04-05 00:35:17.743768 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.81s
2026-04-05 00:35:17.743775 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.99s
2026-04-05 00:35:17.743794 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.33s
2026-04-05 00:35:17.743801 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.86s
2026-04-05 00:35:17.743808 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.92s
2026-04-05 00:35:17.743815 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.18s
2026-04-05 00:35:17.743822 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.28s
2026-04-05 00:35:17.743828 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.88s
2026-04-05 00:35:17.743835 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.48s
2026-04-05 00:35:17.743842 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.45s
2026-04-05 00:35:17.743848 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.84s
2026-04-05 00:35:17.743855 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.61s
2026-04-05 00:35:17.743862 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.25s
2026-04-05 00:35:17.743869 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.09s
2026-04-05 00:35:17.743875 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.67s
2026-04-05 00:35:17.743882 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.57s
2026-04-05 00:35:17.743889 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.54s
2026-04-05 00:35:17.743895 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.74s
2026-04-05 00:35:17.743902 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.73s
2026-04-05 00:35:17.743909 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.63s
2026-04-05 00:35:17.945968 | orchestrator | + osism apply fail2ban
2026-04-05 00:35:29.787112 | orchestrator | 2026-04-05 00:35:29 | INFO  | Prepare task for execution of fail2ban.
2026-04-05 00:35:29.883649 | orchestrator | 2026-04-05 00:35:29 | INFO  | Task ccb58c9e-ce08-4f8a-9f65-b7dc81591f67 (fail2ban) was prepared for execution.
2026-04-05 00:35:29.883747 | orchestrator | 2026-04-05 00:35:29 | INFO  | It takes a moment until task ccb58c9e-ce08-4f8a-9f65-b7dc81591f67 (fail2ban) has been started and output is visible here.
2026-04-05 00:35:51.725133 | orchestrator |
2026-04-05 00:35:51.725242 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-05 00:35:51.725281 | orchestrator |
2026-04-05 00:35:51.725293 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-05 00:35:51.725304 | orchestrator | Sunday 05 April 2026 00:35:33 +0000 (0:00:00.386) 0:00:00.386 **********
2026-04-05 00:35:51.725316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:35:51.725328 | orchestrator |
2026-04-05 00:35:51.725338 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-05 00:35:51.725348 | orchestrator | Sunday 05 April 2026 00:35:35 +0000 (0:00:01.245) 0:00:01.631 **********
2026-04-05 00:35:51.725358 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:51.725369 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:51.725378 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:51.725388 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:51.725397 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:51.725406 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:51.725416 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:51.725425 | orchestrator |
2026-04-05 00:35:51.725435 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-05 00:35:51.725445 | orchestrator | Sunday 05 April 2026 00:35:46 +0000 (0:00:11.613) 0:00:13.245 **********
2026-04-05 00:35:51.725454 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:51.725463 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:51.725473 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:51.725482 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:51.725491 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:51.725501 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:51.725510 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:51.725519 | orchestrator |
2026-04-05 00:35:51.725529 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-05 00:35:51.725539 | orchestrator | Sunday 05 April 2026 00:35:48 +0000 (0:00:01.783) 0:00:15.028 **********
2026-04-05 00:35:51.725548 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:35:51.725558 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:35:51.725568 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:35:51.725577 | orchestrator | ok: [testbed-manager]
2026-04-05 00:35:51.725586 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:35:51.725610 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:35:51.725620 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:35:51.725640 | orchestrator |
2026-04-05 00:35:51.725652 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-05 00:35:51.725663 | orchestrator | Sunday 05 April 2026 00:35:49 +0000 (0:00:01.326) 0:00:16.354 **********
2026-04-05 00:35:51.725674 | orchestrator | changed: [testbed-manager]
2026-04-05 00:35:51.725686 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:35:51.725698 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:35:51.725709 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:35:51.725720 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:35:51.725731 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:35:51.725742 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:35:51.725753 | orchestrator |
2026-04-05 00:35:51.725764 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:35:51.725790 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:35:51.725802 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:35:51.725815 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:35:51.725827 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:35:51.725846 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:35:51.725857 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:35:51.725869 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:35:51.725880 | orchestrator |
2026-04-05 00:35:51.725891 | orchestrator |
2026-04-05 00:35:51.725902 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:35:51.725914 | orchestrator | Sunday 05 April 2026 00:35:51 +0000 (0:00:01.631) 0:00:17.986 **********
2026-04-05 00:35:51.725924 | orchestrator | ===============================================================================
2026-04-05 00:35:51.725936 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.61s
2026-04-05 00:35:51.725946 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.78s
2026-04-05 00:35:51.725958 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.63s
2026-04-05 00:35:51.725968 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.33s
2026-04-05 00:35:51.725979 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.25s
2026-04-05 00:35:51.973283 | orchestrator | + osism apply network
2026-04-05 00:36:03.383199 | orchestrator | 2026-04-05 00:36:03 | INFO  | Prepare task for execution of network.
2026-04-05 00:36:03.459967 | orchestrator | 2026-04-05 00:36:03 | INFO  | Task 9a2d5f57-0a39-47a5-9454-526f113aba68 (network) was prepared for execution.
2026-04-05 00:36:03.460140 | orchestrator | 2026-04-05 00:36:03 | INFO  | It takes a moment until task 9a2d5f57-0a39-47a5-9454-526f113aba68 (network) has been started and output is visible here.
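PLAY RECAP blocks like the ones above are a convenient gating point for CI wrappers. Below is a minimal sketch of such a check; the sample lines are shortened copies of the fail2ban recap, and the approach (awk over the key=value fields) is an illustration, not how this job actually evaluates results.

```shell
# Scan Ansible PLAY RECAP lines and report FAILED if any host
# shows failed>0 or unreachable>0; otherwise report OK.
result=$(awk '/unreachable=/ {
  for (i = 1; i <= NF; i++)
    if ($i ~ /^(failed|unreachable)=[0-9]+$/) {
      split($i, kv, "=")
      if (kv[2] > 0) bad = 1
    }
}
END { if (bad) print "FAILED"; else print "OK" }' <<'EOF'
testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
EOF
)
echo "$result"
```

The same filter also works on the full console log, since recap lines keep their key=value layout regardless of the `orchestrator |` prefix.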
2026-04-05 00:36:33.608233 | orchestrator |
2026-04-05 00:36:33.608353 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-05 00:36:33.608370 | orchestrator |
2026-04-05 00:36:33.608384 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-05 00:36:33.608396 | orchestrator | Sunday 05 April 2026 00:36:07 +0000 (0:00:00.426) 0:00:00.426 **********
2026-04-05 00:36:33.608408 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:33.608420 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:33.608432 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:33.608442 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:33.608453 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:33.608464 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:33.608475 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:33.608486 | orchestrator |
2026-04-05 00:36:33.608497 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-05 00:36:33.608508 | orchestrator | Sunday 05 April 2026 00:36:07 +0000 (0:00:00.646) 0:00:01.073 **********
2026-04-05 00:36:33.608521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:36:33.608534 | orchestrator |
2026-04-05 00:36:33.608546 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-05 00:36:33.608556 | orchestrator | Sunday 05 April 2026 00:36:09 +0000 (0:00:01.196) 0:00:02.269 **********
2026-04-05 00:36:33.608567 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:33.608578 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:33.608589 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:33.608600 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:33.608611 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:33.608646 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:33.608658 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:33.608684 | orchestrator |
2026-04-05 00:36:33.608696 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-05 00:36:33.608707 | orchestrator | Sunday 05 April 2026 00:36:11 +0000 (0:00:02.457) 0:00:04.727 **********
2026-04-05 00:36:33.608735 | orchestrator | ok: [testbed-manager]
2026-04-05 00:36:33.608760 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:36:33.608773 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:36:33.608785 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:36:33.608797 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:36:33.608810 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:36:33.608822 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:36:33.608834 | orchestrator |
2026-04-05 00:36:33.608847 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-05 00:36:33.608860 | orchestrator | Sunday 05 April 2026 00:36:13 +0000 (0:00:01.520) 0:00:06.248 **********
2026-04-05 00:36:33.608873 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-05 00:36:33.608886 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-05 00:36:33.608898 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-05 00:36:33.608911 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-05 00:36:33.608924 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-05 00:36:33.608935 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-05 00:36:33.608945 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-05 00:36:33.608956 | orchestrator |
2026-04-05 00:36:33.608967 | orchestrator | TASK [osism.commons.network : Write network_netplan_config_template to temporary file] ***
2026-04-05 00:36:33.608979 | orchestrator | Sunday 05 April 2026 00:36:14 +0000 (0:00:01.252) 0:00:07.501 **********
2026-04-05 00:36:33.608990 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:33.609002 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:33.609013 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:33.609023 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:33.609034 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:33.609045 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:33.609056 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:33.609067 | orchestrator |
2026-04-05 00:36:33.609078 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] ***
2026-04-05 00:36:33.609136 | orchestrator | Sunday 05 April 2026 00:36:15 +0000 (0:00:00.831) 0:00:08.249 **********
2026-04-05 00:36:33.609149 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:33.609160 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:33.609171 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:33.609181 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:33.609192 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:33.609203 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:33.609213 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:33.609224 | orchestrator |
2026-04-05 00:36:33.609235 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] ***
2026-04-05 00:36:33.609246 | orchestrator | Sunday 05 April 2026 00:36:15 +0000 (0:00:00.839) 0:00:09.081 **********
2026-04-05 00:36:33.609257 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:36:33.609267 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:36:33.609278 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:36:33.609289 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:36:33.609299 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:36:33.609310 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:36:33.609321 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:36:33.609331 | orchestrator |
2026-04-05 00:36:33.609343 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-05 00:36:33.609354 | orchestrator | Sunday 05 April 2026 00:36:16 +0000 (0:00:00.839) 0:00:09.921 **********
2026-04-05 00:36:33.609373 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 00:36:33.609384 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 00:36:33.609395 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:36:33.609405 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 00:36:33.609416 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 00:36:33.609427 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 00:36:33.609437 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 00:36:33.609448 | orchestrator |
2026-04-05 00:36:33.609476 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-05 00:36:33.609488 | orchestrator | Sunday 05 April 2026 00:36:20 +0000 (0:00:03.598) 0:00:13.519 **********
2026-04-05 00:36:33.609499 | orchestrator | changed: [testbed-manager]
2026-04-05 00:36:33.609509 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:36:33.609520 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:36:33.609531 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:36:33.609559 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:36:33.609571 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:36:33.609582 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:36:33.609592 | orchestrator |
2026-04-05 00:36:33.609603 | orchestrator | TASK
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-05 00:36:33.609614 | orchestrator | Sunday 05 April 2026 00:36:22 +0000 (0:00:01.662) 0:00:15.182 ********** 2026-04-05 00:36:33.609625 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 00:36:33.609635 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 00:36:33.609646 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 00:36:33.609657 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 00:36:33.609667 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 00:36:33.609678 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 00:36:33.609688 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 00:36:33.609699 | orchestrator | 2026-04-05 00:36:33.609710 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-05 00:36:33.609721 | orchestrator | Sunday 05 April 2026 00:36:23 +0000 (0:00:01.945) 0:00:17.128 ********** 2026-04-05 00:36:33.609731 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:33.609742 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:33.609753 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:33.609764 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:33.609774 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:33.609785 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:33.609796 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:33.609806 | orchestrator | 2026-04-05 00:36:33.609817 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-05 00:36:33.609828 | orchestrator | Sunday 05 April 2026 00:36:25 +0000 (0:00:01.144) 0:00:18.272 ********** 2026-04-05 00:36:33.609839 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:36:33.609850 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:36:33.609860 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
00:36:33.609871 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:36:33.609882 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:36:33.609893 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:36:33.609903 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:36:33.609914 | orchestrator | 2026-04-05 00:36:33.609925 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-05 00:36:33.609936 | orchestrator | Sunday 05 April 2026 00:36:25 +0000 (0:00:00.652) 0:00:18.925 ********** 2026-04-05 00:36:33.609946 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:33.609957 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:33.609968 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:33.609978 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:33.609989 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:33.610005 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:33.610076 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:33.610107 | orchestrator | 2026-04-05 00:36:33.610127 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-05 00:36:33.610138 | orchestrator | Sunday 05 April 2026 00:36:27 +0000 (0:00:02.206) 0:00:21.132 ********** 2026-04-05 00:36:33.610149 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:36:33.610160 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:36:33.610171 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:36:33.610182 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:36:33.610193 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:36:33.610204 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:36:33.610214 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-05 00:36:33.610227 | orchestrator | 2026-04-05 00:36:33.610238 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-05 00:36:33.610249 | orchestrator | Sunday 05 April 2026 00:36:28 +0000 (0:00:00.939) 0:00:22.072 ********** 2026-04-05 00:36:33.610260 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:33.610270 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:36:33.610281 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:36:33.610292 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:36:33.610302 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:36:33.610313 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:36:33.610323 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:36:33.610334 | orchestrator | 2026-04-05 00:36:33.610345 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-05 00:36:33.610356 | orchestrator | Sunday 05 April 2026 00:36:30 +0000 (0:00:01.706) 0:00:23.778 ********** 2026-04-05 00:36:33.610368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:36:33.610381 | orchestrator | 2026-04-05 00:36:33.610391 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-05 00:36:33.610402 | orchestrator | Sunday 05 April 2026 00:36:31 +0000 (0:00:01.282) 0:00:25.060 ********** 2026-04-05 00:36:33.610413 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:33.610424 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:33.610434 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:33.610445 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:33.610456 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:33.610466 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:33.610477 | orchestrator | ok: [testbed-node-5] 2026-04-05 
00:36:33.610487 | orchestrator | 2026-04-05 00:36:33.610498 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-05 00:36:33.610509 | orchestrator | Sunday 05 April 2026 00:36:33 +0000 (0:00:01.147) 0:00:26.207 ********** 2026-04-05 00:36:33.610520 | orchestrator | ok: [testbed-manager] 2026-04-05 00:36:33.610531 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:36:33.610541 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:36:33.610552 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:36:33.610563 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:36:33.610582 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:36:51.348554 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:36:51.348636 | orchestrator | 2026-04-05 00:36:51.348643 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-05 00:36:51.348649 | orchestrator | Sunday 05 April 2026 00:36:33 +0000 (0:00:00.706) 0:00:26.914 ********** 2026-04-05 00:36:51.348655 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:36:51.348659 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:36:51.348664 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:36:51.348668 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:36:51.348672 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:36:51.348689 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:36:51.348693 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:36:51.348697 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:36:51.348701 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:36:51.348704 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:36:51.348708 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:36:51.348712 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-05 00:36:51.348716 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:36:51.348720 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-05 00:36:51.348724 | orchestrator | 2026-04-05 00:36:51.348727 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-05 00:36:51.348731 | orchestrator | Sunday 05 April 2026 00:36:35 +0000 (0:00:01.278) 0:00:28.192 ********** 2026-04-05 00:36:51.348735 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:36:51.348739 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:36:51.348743 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:36:51.348747 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:36:51.348751 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:36:51.348755 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:36:51.348758 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:36:51.348762 | orchestrator | 2026-04-05 00:36:51.348766 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-05 00:36:51.348770 | orchestrator | Sunday 05 April 2026 00:36:35 +0000 (0:00:00.682) 0:00:28.875 ********** 2026-04-05 00:36:51.348786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-node-0, testbed-manager, testbed-node-1, testbed-node-4, testbed-node-3, testbed-node-5 2026-04-05 00:36:51.348792 | orchestrator | 2026-04-05 
00:36:51.348796 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-05 00:36:51.348800 | orchestrator | Sunday 05 April 2026 00:36:40 +0000 (0:00:04.786) 0:00:33.661 ********** 2026-04-05 00:36:51.348813 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-05 00:36:51.348820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 
2026-04-05 00:36:51.348853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348858 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-05 00:36:51.348866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-05 00:36:51.348870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-05 00:36:51.348874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-05 00:36:51.348878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-05 00:36:51.348882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-05 00:36:51.348886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-05 00:36:51.348890 | orchestrator | 2026-04-05 00:36:51.348896 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-05 00:36:51.348900 | orchestrator | Sunday 05 April 2026 00:36:46 +0000 (0:00:05.728) 0:00:39.390 ********** 2026-04-05 00:36:51.348904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348908 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-05 00:36:51.348912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348920 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:36:51.348928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-05 00:36:51.348932 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-05 00:36:51.348940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:37:04.028586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-05 00:37:04.028698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-05 00:37:04.028717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-05 00:37:04.028729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-05 00:37:04.028741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-05 00:37:04.028752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-05 00:37:04.028764 | orchestrator | 2026-04-05 00:37:04.028776 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-05 00:37:04.028789 | orchestrator | Sunday 05 April 2026 00:36:52 +0000 (0:00:06.061) 0:00:45.452 ********** 2026-04-05 00:37:04.028818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:37:04.028830 | orchestrator | 2026-04-05 00:37:04.028842 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-05 00:37:04.028853 | orchestrator | Sunday 05 April 2026 00:36:53 +0000 (0:00:01.463) 0:00:46.915 ********** 2026-04-05 00:37:04.028864 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:04.028877 | orchestrator | ok: [testbed-node-0] 2026-04-05 
00:37:04.028888 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:04.028900 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:04.028911 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:04.028922 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:04.028933 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:04.028967 | orchestrator | 2026-04-05 00:37:04.028979 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-05 00:37:04.028990 | orchestrator | Sunday 05 April 2026 00:36:54 +0000 (0:00:01.010) 0:00:47.926 ********** 2026-04-05 00:37:04.029001 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:37:04.029013 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:37:04.029024 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:37:04.029035 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:37:04.029046 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:37:04.029058 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:37:04.029069 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:37:04.029080 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:37:04.029091 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:37:04.029102 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:04.029114 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:37:04.029150 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:37:04.029165 | orchestrator | 
skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:37:04.029177 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:37:04.029190 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:04.029203 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:37:04.029216 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:37:04.029229 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:37:04.029259 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:37:04.029273 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:37:04.029286 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:37:04.029300 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:37:04.029313 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:37:04.029328 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:37:04.029341 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:04.029353 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:37:04.029367 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:37:04.029380 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:37:04.029392 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:37:04.029406 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:04.029419 | orchestrator | skipping: [testbed-node-5] => 
(item=/etc/systemd/network/30-vxlan1.network)  2026-04-05 00:37:04.029433 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-05 00:37:04.029447 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-05 00:37:04.029461 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-05 00:37:04.029472 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:04.029483 | orchestrator | 2026-04-05 00:37:04.029494 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-05 00:37:04.029513 | orchestrator | Sunday 05 April 2026 00:36:55 +0000 (0:00:01.009) 0:00:48.935 ********** 2026-04-05 00:37:04.029525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:37:04.029536 | orchestrator | 2026-04-05 00:37:04.029547 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-05 00:37:04.029558 | orchestrator | Sunday 05 April 2026 00:36:57 +0000 (0:00:01.340) 0:00:50.275 ********** 2026-04-05 00:37:04.029576 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:37:04.029587 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:04.029598 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:04.029609 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:37:04.029621 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:04.029632 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:04.029643 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:04.029654 | orchestrator | 2026-04-05 00:37:04.029665 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 
2026-04-05 00:37:04.029676 | orchestrator | Sunday 05 April 2026 00:36:57 +0000 (0:00:00.688) 0:00:50.963 ********** 2026-04-05 00:37:04.029687 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:37:04.029698 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:04.029709 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:04.029720 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:37:04.029731 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:04.029742 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:04.029753 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:04.029763 | orchestrator | 2026-04-05 00:37:04.029775 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-05 00:37:04.029786 | orchestrator | Sunday 05 April 2026 00:36:58 +0000 (0:00:00.778) 0:00:51.742 ********** 2026-04-05 00:37:04.029797 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:37:04.029807 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:04.029818 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:04.029829 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:37:04.029840 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:04.029851 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:04.029862 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:04.029873 | orchestrator | 2026-04-05 00:37:04.029884 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-05 00:37:04.029895 | orchestrator | Sunday 05 April 2026 00:36:59 +0000 (0:00:00.611) 0:00:52.354 ********** 2026-04-05 00:37:04.029906 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:04.029917 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:04.029928 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:04.029939 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:04.029950 | orchestrator | ok: 
[testbed-node-3] 2026-04-05 00:37:04.029961 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:04.029972 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:04.029982 | orchestrator | 2026-04-05 00:37:04.029994 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-05 00:37:04.030005 | orchestrator | Sunday 05 April 2026 00:37:00 +0000 (0:00:01.669) 0:00:54.024 ********** 2026-04-05 00:37:04.030078 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:04.030093 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:04.030105 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:04.030115 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:04.030147 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:04.030158 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:04.030169 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:04.030180 | orchestrator | 2026-04-05 00:37:04.030191 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-05 00:37:04.030210 | orchestrator | Sunday 05 April 2026 00:37:01 +0000 (0:00:01.126) 0:00:55.150 ********** 2026-04-05 00:37:04.030221 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:04.030232 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:37:04.030243 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:37:04.030254 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:37:04.030265 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:37:04.030275 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:37:04.030286 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:37:04.030297 | orchestrator | 2026-04-05 00:37:04.030316 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-05 00:37:05.831202 | orchestrator | Sunday 05 April 2026 00:37:04 +0000 (0:00:02.031) 0:00:57.181 ********** 2026-04-05 00:37:05.831308 | orchestrator | skipping: [testbed-manager] 2026-04-05 
00:37:05.831325 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:05.831338 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:05.831349 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:37:05.831360 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:05.831371 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:05.831382 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:05.831393 | orchestrator | 2026-04-05 00:37:05.831404 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-05 00:37:05.831416 | orchestrator | Sunday 05 April 2026 00:37:04 +0000 (0:00:00.859) 0:00:58.041 ********** 2026-04-05 00:37:05.831427 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:37:05.831438 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:37:05.831448 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:37:05.831459 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:37:05.831470 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:37:05.831481 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:37:05.831492 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:37:05.831502 | orchestrator | 2026-04-05 00:37:05.831514 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:37:05.831526 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 00:37:05.831539 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:37:05.831550 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:37:05.831561 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:37:05.831571 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 
failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:37:05.831602 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:37:05.831613 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 00:37:05.831629 | orchestrator | 2026-04-05 00:37:05.831641 | orchestrator | 2026-04-05 00:37:05.831652 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:37:05.831663 | orchestrator | Sunday 05 April 2026 00:37:05 +0000 (0:00:00.572) 0:00:58.614 ********** 2026-04-05 00:37:05.831674 | orchestrator | =============================================================================== 2026-04-05 00:37:05.831685 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.06s 2026-04-05 00:37:05.831696 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.73s 2026-04-05 00:37:05.831707 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.79s 2026-04-05 00:37:05.831740 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.60s 2026-04-05 00:37:05.831751 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.46s 2026-04-05 00:37:05.831762 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s 2026-04-05 00:37:05.831773 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.03s 2026-04-05 00:37:05.831783 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.95s 2026-04-05 00:37:05.831794 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s 2026-04-05 00:37:05.831805 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 
1.67s 2026-04-05 00:37:05.831816 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s 2026-04-05 00:37:05.831827 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.52s 2026-04-05 00:37:05.831837 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.46s 2026-04-05 00:37:05.831848 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.34s 2026-04-05 00:37:05.831859 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s 2026-04-05 00:37:05.831869 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.28s 2026-04-05 00:37:05.831880 | orchestrator | osism.commons.network : Create required directories --------------------- 1.25s 2026-04-05 00:37:05.831891 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s 2026-04-05 00:37:05.831902 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s 2026-04-05 00:37:05.831913 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s 2026-04-05 00:37:06.044800 | orchestrator | + osism apply wireguard 2026-04-05 00:37:17.360936 | orchestrator | 2026-04-05 00:37:17 | INFO  | Prepare task for execution of wireguard. 2026-04-05 00:37:17.439675 | orchestrator | 2026-04-05 00:37:17 | INFO  | Task 64a5752f-b8f1-410c-a2ce-4e358b518e30 (wireguard) was prepared for execution. 2026-04-05 00:37:17.439763 | orchestrator | 2026-04-05 00:37:17 | INFO  | It takes a moment until task 64a5752f-b8f1-410c-a2ce-4e358b518e30 (wireguard) has been started and output is visible here. 
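The TASKS RECAP blocks above list per-task durations in a "task name ------ 6.06s" layout. When comparing runs of this job, it can help to pull the slowest entries out of the log; a minimal sketch, assuming only that trailing-duration layout (the function name is ours, not part of osism or Zuul):

```shell
# slowest_tasks: print recap lines sorted by their trailing "N.NNs" duration,
# longest first. Assumes the "task name ------ 6.06s" layout shown in the
# TASKS RECAP sections above; reads log text on stdin.
slowest_tasks() {
  grep -E -e '-+ [0-9]+\.[0-9]+s( |$)' \
    | sed -E 's/^(.*) -+ ([0-9]+\.[0-9]+)s.*$/\2 \1/' \
    | sort -rn
}
```

For example, `slowest_tasks < job.log | head` would surface the "Create systemd networkd network files" task (6.06s) at the top of the recap above.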
2026-04-05 00:37:37.852463 | orchestrator | 2026-04-05 00:37:37.852577 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-05 00:37:37.852595 | orchestrator | 2026-04-05 00:37:37.852608 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-05 00:37:37.852621 | orchestrator | Sunday 05 April 2026 00:37:20 +0000 (0:00:00.309) 0:00:00.309 ********** 2026-04-05 00:37:37.852633 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:37.852645 | orchestrator | 2026-04-05 00:37:37.852657 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-05 00:37:37.852668 | orchestrator | Sunday 05 April 2026 00:37:22 +0000 (0:00:01.900) 0:00:02.210 ********** 2026-04-05 00:37:37.852679 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:37.852692 | orchestrator | 2026-04-05 00:37:37.852703 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-05 00:37:37.852714 | orchestrator | Sunday 05 April 2026 00:37:29 +0000 (0:00:07.235) 0:00:09.445 ********** 2026-04-05 00:37:37.852725 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:37.852736 | orchestrator | 2026-04-05 00:37:37.852747 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-05 00:37:37.852759 | orchestrator | Sunday 05 April 2026 00:37:30 +0000 (0:00:00.546) 0:00:09.991 ********** 2026-04-05 00:37:37.852770 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:37.852781 | orchestrator | 2026-04-05 00:37:37.852792 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-05 00:37:37.852803 | orchestrator | Sunday 05 April 2026 00:37:31 +0000 (0:00:00.463) 0:00:10.454 ********** 2026-04-05 00:37:37.852839 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:37.852851 | orchestrator | 2026-04-05 
00:37:37.852862 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-05 00:37:37.852873 | orchestrator | Sunday 05 April 2026 00:37:31 +0000 (0:00:00.537) 0:00:10.991 ********** 2026-04-05 00:37:37.852884 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:37.852895 | orchestrator | 2026-04-05 00:37:37.852906 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-05 00:37:37.852917 | orchestrator | Sunday 05 April 2026 00:37:31 +0000 (0:00:00.419) 0:00:11.411 ********** 2026-04-05 00:37:37.852928 | orchestrator | ok: [testbed-manager] 2026-04-05 00:37:37.852939 | orchestrator | 2026-04-05 00:37:37.852950 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-05 00:37:37.852961 | orchestrator | Sunday 05 April 2026 00:37:32 +0000 (0:00:00.486) 0:00:11.897 ********** 2026-04-05 00:37:37.852972 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:37.852983 | orchestrator | 2026-04-05 00:37:37.852995 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-05 00:37:37.853008 | orchestrator | Sunday 05 April 2026 00:37:33 +0000 (0:00:01.231) 0:00:13.129 ********** 2026-04-05 00:37:37.853021 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 00:37:37.853033 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:37.853046 | orchestrator | 2026-04-05 00:37:37.853059 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-05 00:37:37.853071 | orchestrator | Sunday 05 April 2026 00:37:34 +0000 (0:00:00.985) 0:00:14.115 ********** 2026-04-05 00:37:37.853099 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:37.853113 | orchestrator | 2026-04-05 00:37:37.853126 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-05 
00:37:37.853139 | orchestrator | Sunday 05 April 2026 00:37:36 +0000 (0:00:01.999) 0:00:16.115 ********** 2026-04-05 00:37:37.853152 | orchestrator | changed: [testbed-manager] 2026-04-05 00:37:37.853187 | orchestrator | 2026-04-05 00:37:37.853200 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:37:37.853214 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:37:37.853227 | orchestrator | 2026-04-05 00:37:37.853240 | orchestrator | 2026-04-05 00:37:37.853252 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:37:37.853265 | orchestrator | Sunday 05 April 2026 00:37:37 +0000 (0:00:00.943) 0:00:17.058 ********** 2026-04-05 00:37:37.853277 | orchestrator | =============================================================================== 2026-04-05 00:37:37.853289 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.24s 2026-04-05 00:37:37.853303 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.00s 2026-04-05 00:37:37.853315 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.90s 2026-04-05 00:37:37.853328 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s 2026-04-05 00:37:37.853341 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s 2026-04-05 00:37:37.853353 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2026-04-05 00:37:37.853364 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-04-05 00:37:37.853375 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s 2026-04-05 00:37:37.853386 | orchestrator | osism.services.wireguard : Get 
private key - server --------------------- 0.49s 2026-04-05 00:37:37.853397 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2026-04-05 00:37:37.853409 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-04-05 00:37:38.034788 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-05 00:37:38.070717 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-05 00:37:38.070838 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-05 00:37:38.149835 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 190 0 --:--:-- --:--:-- --:--:-- 192 2026-04-05 00:37:38.162273 | orchestrator | + osism apply --environment custom workarounds 2026-04-05 00:37:39.427888 | orchestrator | 2026-04-05 00:37:39 | INFO  | Trying to run play workarounds in environment custom 2026-04-05 00:37:49.521844 | orchestrator | 2026-04-05 00:37:49 | INFO  | Prepare task for execution of workarounds. 2026-04-05 00:37:49.604266 | orchestrator | 2026-04-05 00:37:49 | INFO  | Task 6f363de2-3996-48ae-932d-332a5581f674 (workarounds) was prepared for execution. 2026-04-05 00:37:49.604365 | orchestrator | 2026-04-05 00:37:49 | INFO  | It takes a moment until task 6f363de2-3996-48ae-932d-332a5581f674 (workarounds) has been started and output is visible here. 
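The wireguard play above generates a server key pair and preshared key, then copies `wg0.conf` and per-client configuration files before enabling `wg-quick@wg0.service`. For orientation, a minimal `wg0.conf` for such a hub looks roughly like the following; every key, address, and port here is a placeholder assumption, not a value taken from this run or from the osism role's template:

```ini
# /etc/wireguard/wg0.conf - hypothetical sketch, not the file deployed above
[Interface]
Address = 192.168.90.1/24        # tunnel address of the manager (placeholder)
ListenPort = 51820               # WireGuard's conventional default port
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.90.2/32     # tunnel address of one client (placeholder)
```

A file of this shape is brought up with `wg-quick up wg0`, which is what the `wg-quick@wg0.service` unit managed and restarted by the play does.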
2026-04-05 00:38:14.974365 | orchestrator | 2026-04-05 00:38:14.974455 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:38:14.974466 | orchestrator | 2026-04-05 00:38:14.974474 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-05 00:38:14.974481 | orchestrator | Sunday 05 April 2026 00:37:52 +0000 (0:00:00.189) 0:00:00.189 ********** 2026-04-05 00:38:14.974490 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-05 00:38:14.974498 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-05 00:38:14.974504 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-05 00:38:14.974511 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-05 00:38:14.974518 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-05 00:38:14.974525 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-05 00:38:14.974531 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-05 00:38:14.974538 | orchestrator | 2026-04-05 00:38:14.974545 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-05 00:38:14.974552 | orchestrator | 2026-04-05 00:38:14.974559 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-05 00:38:14.974564 | orchestrator | Sunday 05 April 2026 00:37:53 +0000 (0:00:00.758) 0:00:00.947 ********** 2026-04-05 00:38:14.974578 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:14.974586 | orchestrator | 2026-04-05 00:38:14.974592 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-05 00:38:14.974600 | orchestrator | 2026-04-05 00:38:14.974606 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-05 00:38:14.974613 | orchestrator | Sunday 05 April 2026 00:37:56 +0000 (0:00:02.895) 0:00:03.843 ********** 2026-04-05 00:38:14.974620 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:14.974627 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:14.974633 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:14.974640 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:14.974647 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:14.974653 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:14.974660 | orchestrator | 2026-04-05 00:38:14.974667 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-05 00:38:14.974674 | orchestrator | 2026-04-05 00:38:14.974681 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-05 00:38:14.974688 | orchestrator | Sunday 05 April 2026 00:37:58 +0000 (0:00:02.471) 0:00:06.314 ********** 2026-04-05 00:38:14.974695 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:38:14.974702 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:38:14.974725 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:38:14.974733 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:38:14.974739 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:38:14.974746 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 00:38:14.974752 | orchestrator | 2026-04-05 00:38:14.974759 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-05 00:38:14.974765 | orchestrator | Sunday 05 April 2026 00:38:00 +0000 (0:00:01.394) 0:00:07.709 ********** 2026-04-05 00:38:14.974772 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:38:14.974779 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:38:14.974785 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:38:14.974792 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:38:14.974798 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:38:14.974805 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:38:14.974812 | orchestrator | 2026-04-05 00:38:14.974819 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-05 00:38:14.974826 | orchestrator | Sunday 05 April 2026 00:38:04 +0000 (0:00:04.043) 0:00:11.753 ********** 2026-04-05 00:38:14.974832 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:14.974839 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:14.974845 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:14.974852 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:14.974859 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:14.974866 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:14.974872 | orchestrator | 2026-04-05 00:38:14.974879 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-05 00:38:14.974885 | orchestrator | 2026-04-05 00:38:14.974892 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-05 00:38:14.974898 | orchestrator | Sunday 05 April 2026 00:38:04 +0000 (0:00:00.558) 0:00:12.311 ********** 2026-04-05 00:38:14.974905 | orchestrator | changed: [testbed-manager] 2026-04-05 00:38:14.974911 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:38:14.974918 | orchestrator | changed: [testbed-node-1] 2026-04-05 
00:38:14.974926 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:38:14.974934 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:38:14.974941 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:38:14.974948 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:38:14.974955 | orchestrator | 2026-04-05 00:38:14.974962 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-05 00:38:14.974969 | orchestrator | Sunday 05 April 2026 00:38:06 +0000 (0:00:01.824) 0:00:14.136 ********** 2026-04-05 00:38:14.974976 | orchestrator | changed: [testbed-manager] 2026-04-05 00:38:14.974983 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:38:14.974989 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:38:14.974996 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:38:14.975003 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:38:14.975010 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:38:14.975034 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:38:14.975043 | orchestrator | 2026-04-05 00:38:14.975050 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-05 00:38:14.975058 | orchestrator | Sunday 05 April 2026 00:38:08 +0000 (0:00:01.428) 0:00:15.564 ********** 2026-04-05 00:38:14.975066 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:14.975072 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:14.975079 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:14.975086 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:14.975093 | orchestrator | ok: [testbed-manager] 2026-04-05 00:38:14.975100 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:14.975106 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:14.975121 | orchestrator | 2026-04-05 00:38:14.975128 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-05 00:38:14.975135 | orchestrator 
| Sunday 05 April 2026 00:38:09 +0000 (0:00:01.754) 0:00:17.319 ********** 2026-04-05 00:38:14.975141 | orchestrator | changed: [testbed-manager] 2026-04-05 00:38:14.975148 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:38:14.975154 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:38:14.975161 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:38:14.975168 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:38:14.975175 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:38:14.975182 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:38:14.975189 | orchestrator | 2026-04-05 00:38:14.975195 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-05 00:38:14.975233 | orchestrator | Sunday 05 April 2026 00:38:11 +0000 (0:00:01.516) 0:00:18.835 ********** 2026-04-05 00:38:14.975247 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:38:14.975255 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:14.975261 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:14.975267 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:14.975273 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:14.975279 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:14.975285 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:14.975291 | orchestrator | 2026-04-05 00:38:14.975298 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-05 00:38:14.975305 | orchestrator | 2026-04-05 00:38:14.975312 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-05 00:38:14.975319 | orchestrator | Sunday 05 April 2026 00:38:12 +0000 (0:00:00.814) 0:00:19.649 ********** 2026-04-05 00:38:14.975327 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:38:14.975333 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:38:14.975340 | orchestrator | ok: 
[testbed-manager] 2026-04-05 00:38:14.975346 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:38:14.975353 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:38:14.975359 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:38:14.975365 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:38:14.975371 | orchestrator | 2026-04-05 00:38:14.975378 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:38:14.975386 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:38:14.975394 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:14.975401 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:14.975407 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:14.975413 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:14.975420 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:14.975426 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:14.975433 | orchestrator | 2026-04-05 00:38:14.975440 | orchestrator | 2026-04-05 00:38:14.975446 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:38:14.975454 | orchestrator | Sunday 05 April 2026 00:38:14 +0000 (0:00:02.632) 0:00:22.281 ********** 2026-04-05 00:38:14.975460 | orchestrator | =============================================================================== 2026-04-05 00:38:14.975475 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.04s 2026-04-05 00:38:14.975481 | orchestrator | Apply 
netplan configuration --------------------------------------------- 2.90s 2026-04-05 00:38:14.975488 | orchestrator | Install python3-docker -------------------------------------------------- 2.63s 2026-04-05 00:38:14.975494 | orchestrator | Apply netplan configuration --------------------------------------------- 2.47s 2026-04-05 00:38:14.975501 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.82s 2026-04-05 00:38:14.975507 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.75s 2026-04-05 00:38:14.975514 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.52s 2026-04-05 00:38:14.975520 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.43s 2026-04-05 00:38:14.975527 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.39s 2026-04-05 00:38:14.975534 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.81s 2026-04-05 00:38:14.975541 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2026-04-05 00:38:14.975556 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.56s 2026-04-05 00:38:15.505883 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-05 00:38:26.896296 | orchestrator | 2026-04-05 00:38:26 | INFO  | Prepare task for execution of reboot. 2026-04-05 00:38:26.980273 | orchestrator | 2026-04-05 00:38:26 | INFO  | Task d935c78e-ab2f-4d4d-98e3-a1ec0ca6a738 (reboot) was prepared for execution. 2026-04-05 00:38:26.980357 | orchestrator | 2026-04-05 00:38:26 | INFO  | It takes a moment until task d935c78e-ab2f-4d4d-98e3-a1ec0ca6a738 (reboot) has been started and output is visible here. 
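The reboot play invoked here only proceeds because `-e ireallymeanit=yes` is passed; its first task, "Exit playbook, if user did not mean to reboot systems", skips when the confirmation is present. Destructive plays commonly open with a guard of this kind. A hedged sketch of such a task (hypothetical, mirroring the behaviour seen in the output below, not the actual osism playbook source):

```yaml
# Hypothetical confirmation guard: abort unless -e ireallymeanit=yes was given.
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: "To really reboot these systems, rerun with -e ireallymeanit=yes"
  when: ireallymeanit | default('no') != 'yes'
```

With the extra variable set, the `when:` condition is false, so the task reports `skipping:` for each host, exactly as the transcript shows.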
2026-04-05 00:38:37.985129 | orchestrator | 2026-04-05 00:38:37.985305 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:38:37.985325 | orchestrator | 2026-04-05 00:38:37.985337 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:38:37.985349 | orchestrator | Sunday 05 April 2026 00:38:29 +0000 (0:00:00.227) 0:00:00.227 ********** 2026-04-05 00:38:37.985361 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:37.985373 | orchestrator | 2026-04-05 00:38:37.985384 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:38:37.985395 | orchestrator | Sunday 05 April 2026 00:38:30 +0000 (0:00:00.134) 0:00:00.362 ********** 2026-04-05 00:38:37.985406 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:38:37.985417 | orchestrator | 2026-04-05 00:38:37.985444 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:38:37.985456 | orchestrator | Sunday 05 April 2026 00:38:31 +0000 (0:00:01.239) 0:00:01.601 ********** 2026-04-05 00:38:37.985467 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:38:37.985478 | orchestrator | 2026-04-05 00:38:37.985489 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:38:37.985499 | orchestrator | 2026-04-05 00:38:37.985510 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:38:37.985521 | orchestrator | Sunday 05 April 2026 00:38:31 +0000 (0:00:00.122) 0:00:01.724 ********** 2026-04-05 00:38:37.985532 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:37.985543 | orchestrator | 2026-04-05 00:38:37.985554 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:38:37.985565 | orchestrator | Sunday 05 April 2026 
00:38:31 +0000 (0:00:00.093) 0:00:01.817 ********** 2026-04-05 00:38:37.985576 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:38:37.985587 | orchestrator | 2026-04-05 00:38:37.985598 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:38:37.985609 | orchestrator | Sunday 05 April 2026 00:38:32 +0000 (0:00:01.003) 0:00:02.820 ********** 2026-04-05 00:38:37.985620 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:38:37.985653 | orchestrator | 2026-04-05 00:38:37.985667 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:38:37.985680 | orchestrator | 2026-04-05 00:38:37.985693 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:38:37.985707 | orchestrator | Sunday 05 April 2026 00:38:32 +0000 (0:00:00.114) 0:00:02.935 ********** 2026-04-05 00:38:37.985719 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:37.985731 | orchestrator | 2026-04-05 00:38:37.985743 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:38:37.985756 | orchestrator | Sunday 05 April 2026 00:38:32 +0000 (0:00:00.108) 0:00:03.044 ********** 2026-04-05 00:38:37.985768 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:38:37.985780 | orchestrator | 2026-04-05 00:38:37.985792 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:38:37.985804 | orchestrator | Sunday 05 April 2026 00:38:33 +0000 (0:00:01.039) 0:00:04.084 ********** 2026-04-05 00:38:37.985816 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:38:37.985829 | orchestrator | 2026-04-05 00:38:37.985842 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:38:37.985854 | orchestrator | 2026-04-05 00:38:37.985867 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-04-05 00:38:37.985879 | orchestrator | Sunday 05 April 2026 00:38:33 +0000 (0:00:00.110) 0:00:04.194 ********** 2026-04-05 00:38:37.985891 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:37.985903 | orchestrator | 2026-04-05 00:38:37.985916 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:38:37.985929 | orchestrator | Sunday 05 April 2026 00:38:34 +0000 (0:00:00.112) 0:00:04.307 ********** 2026-04-05 00:38:37.985941 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:38:37.985954 | orchestrator | 2026-04-05 00:38:37.985965 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:38:37.985976 | orchestrator | Sunday 05 April 2026 00:38:35 +0000 (0:00:01.022) 0:00:05.330 ********** 2026-04-05 00:38:37.985987 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:38:37.985998 | orchestrator | 2026-04-05 00:38:37.986008 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:38:37.986076 | orchestrator | 2026-04-05 00:38:37.986089 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:38:37.986101 | orchestrator | Sunday 05 April 2026 00:38:35 +0000 (0:00:00.106) 0:00:05.437 ********** 2026-04-05 00:38:37.986113 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:37.986124 | orchestrator | 2026-04-05 00:38:37.986135 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:38:37.986181 | orchestrator | Sunday 05 April 2026 00:38:35 +0000 (0:00:00.215) 0:00:05.652 ********** 2026-04-05 00:38:37.986192 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:38:37.986203 | orchestrator | 2026-04-05 00:38:37.986214 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-05 00:38:37.986245 | orchestrator | Sunday 05 April 2026 00:38:36 +0000 (0:00:01.000) 0:00:06.653 ********** 2026-04-05 00:38:37.986256 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:38:37.986267 | orchestrator | 2026-04-05 00:38:37.986278 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 00:38:37.986289 | orchestrator | 2026-04-05 00:38:37.986300 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 00:38:37.986310 | orchestrator | Sunday 05 April 2026 00:38:36 +0000 (0:00:00.120) 0:00:06.774 ********** 2026-04-05 00:38:37.986321 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:37.986332 | orchestrator | 2026-04-05 00:38:37.986343 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 00:38:37.986354 | orchestrator | Sunday 05 April 2026 00:38:36 +0000 (0:00:00.116) 0:00:06.891 ********** 2026-04-05 00:38:37.986365 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:38:37.986376 | orchestrator | 2026-04-05 00:38:37.986398 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 00:38:37.986409 | orchestrator | Sunday 05 April 2026 00:38:37 +0000 (0:00:01.027) 0:00:07.919 ********** 2026-04-05 00:38:37.986439 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:38:37.986450 | orchestrator | 2026-04-05 00:38:37.986461 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:38:37.986474 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:37.986486 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:37.986504 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-05 00:38:37.986516 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:37.986526 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:37.986538 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:38:37.986548 | orchestrator | 2026-04-05 00:38:37.986559 | orchestrator | 2026-04-05 00:38:37.986570 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:38:37.986582 | orchestrator | Sunday 05 April 2026 00:38:37 +0000 (0:00:00.041) 0:00:07.961 ********** 2026-04-05 00:38:37.986592 | orchestrator | =============================================================================== 2026-04-05 00:38:37.986603 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.33s 2026-04-05 00:38:37.986614 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s 2026-04-05 00:38:37.986625 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2026-04-05 00:38:38.162539 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-05 00:38:49.556682 | orchestrator | 2026-04-05 00:38:49 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-05 00:38:49.631338 | orchestrator | 2026-04-05 00:38:49 | INFO  | Task 67a50e34-cfc8-41b7-ac80-7d3e711f5802 (wait-for-connection) was prepared for execution. 2026-04-05 00:38:49.631436 | orchestrator | 2026-04-05 00:38:49 | INFO  | It takes a moment until task 67a50e34-cfc8-41b7-ac80-7d3e711f5802 (wait-for-connection) has been started and output is visible here. 
2026-04-05 00:39:04.715071 | orchestrator | 2026-04-05 00:39:04.715186 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-05 00:39:04.715202 | orchestrator | 2026-04-05 00:39:04.715212 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-05 00:39:04.715221 | orchestrator | Sunday 05 April 2026 00:38:52 +0000 (0:00:00.330) 0:00:00.330 ********** 2026-04-05 00:39:04.715230 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:39:04.715240 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:39:04.715296 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:39:04.715307 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:39:04.715317 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:39:04.715326 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:39:04.715335 | orchestrator | 2026-04-05 00:39:04.715345 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:39:04.715355 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:04.715366 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:04.715400 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:04.715410 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:04.715418 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:04.715427 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:04.715436 | orchestrator | 2026-04-05 00:39:04.715445 | orchestrator | 2026-04-05 00:39:04.715454 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 00:39:04.715462 | orchestrator | Sunday 05 April 2026 00:39:04 +0000 (0:00:11.505) 0:00:11.835 ********** 2026-04-05 00:39:04.715471 | orchestrator | =============================================================================== 2026-04-05 00:39:04.715480 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.51s 2026-04-05 00:39:04.906905 | orchestrator | + osism apply hddtemp 2026-04-05 00:39:16.262579 | orchestrator | 2026-04-05 00:39:16 | INFO  | Prepare task for execution of hddtemp. 2026-04-05 00:39:16.341020 | orchestrator | 2026-04-05 00:39:16 | INFO  | Task 23f90fd1-0c7a-4e32-b8f0-85b2bb091ded (hddtemp) was prepared for execution. 2026-04-05 00:39:16.341111 | orchestrator | 2026-04-05 00:39:16 | INFO  | It takes a moment until task 23f90fd1-0c7a-4e32-b8f0-85b2bb091ded (hddtemp) has been started and output is visible here. 2026-04-05 00:39:43.250427 | orchestrator | 2026-04-05 00:39:43.250545 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-05 00:39:43.250563 | orchestrator | 2026-04-05 00:39:43.250575 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-05 00:39:43.250586 | orchestrator | Sunday 05 April 2026 00:39:19 +0000 (0:00:00.352) 0:00:00.352 ********** 2026-04-05 00:39:43.250598 | orchestrator | ok: [testbed-manager] 2026-04-05 00:39:43.250610 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:39:43.250621 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:39:43.250632 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:39:43.250643 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:39:43.250670 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:39:43.250681 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:39:43.250692 | orchestrator | 2026-04-05 00:39:43.250703 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-05 00:39:43.250714 | orchestrator | Sunday 05 April 2026 00:39:20 +0000 (0:00:00.625) 0:00:00.978 ********** 2026-04-05 00:39:43.250727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:39:43.250741 | orchestrator | 2026-04-05 00:39:43.250752 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-05 00:39:43.250763 | orchestrator | Sunday 05 April 2026 00:39:21 +0000 (0:00:01.204) 0:00:02.182 ********** 2026-04-05 00:39:43.250774 | orchestrator | ok: [testbed-manager] 2026-04-05 00:39:43.250784 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:39:43.250795 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:39:43.250806 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:39:43.250817 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:39:43.250827 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:39:43.250838 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:39:43.250849 | orchestrator | 2026-04-05 00:39:43.250860 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-05 00:39:43.250871 | orchestrator | Sunday 05 April 2026 00:39:24 +0000 (0:00:02.473) 0:00:04.655 ********** 2026-04-05 00:39:43.250881 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:43.250916 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:39:43.250929 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:39:43.250943 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:39:43.250955 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:39:43.250968 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:39:43.250980 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:39:43.250992 | 
orchestrator | 2026-04-05 00:39:43.251005 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-05 00:39:43.251017 | orchestrator | Sunday 05 April 2026 00:39:25 +0000 (0:00:00.962) 0:00:05.618 ********** 2026-04-05 00:39:43.251030 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:39:43.251043 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:39:43.251055 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:39:43.251068 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:39:43.251081 | orchestrator | ok: [testbed-manager] 2026-04-05 00:39:43.251092 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:39:43.251103 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:39:43.251114 | orchestrator | 2026-04-05 00:39:43.251125 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-05 00:39:43.251136 | orchestrator | Sunday 05 April 2026 00:39:26 +0000 (0:00:01.295) 0:00:06.913 ********** 2026-04-05 00:39:43.251147 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:39:43.251158 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:39:43.251168 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:39:43.251179 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:39:43.251190 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:39:43.251201 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:43.251212 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:39:43.251222 | orchestrator | 2026-04-05 00:39:43.251233 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-05 00:39:43.251244 | orchestrator | Sunday 05 April 2026 00:39:27 +0000 (0:00:00.669) 0:00:07.583 ********** 2026-04-05 00:39:43.251255 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:43.251266 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:39:43.251277 | orchestrator | changed: [testbed-node-4] 
2026-04-05 00:39:43.251312 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:39:43.251323 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:39:43.251335 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:39:43.251346 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:39:43.251357 | orchestrator | 2026-04-05 00:39:43.251368 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-05 00:39:43.251379 | orchestrator | Sunday 05 April 2026 00:39:39 +0000 (0:00:12.725) 0:00:20.309 ********** 2026-04-05 00:39:43.251391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:39:43.251402 | orchestrator | 2026-04-05 00:39:43.251413 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-05 00:39:43.251424 | orchestrator | Sunday 05 April 2026 00:39:41 +0000 (0:00:01.267) 0:00:21.576 ********** 2026-04-05 00:39:43.251435 | orchestrator | changed: [testbed-manager] 2026-04-05 00:39:43.251446 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:39:43.251457 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:39:43.251468 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:39:43.251479 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:39:43.251490 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:39:43.251500 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:39:43.251511 | orchestrator | 2026-04-05 00:39:43.251522 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:39:43.251534 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:39:43.251573 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:39:43.251586 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:39:43.251597 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:39:43.251613 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:39:43.251624 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:39:43.251635 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:39:43.251646 | orchestrator | 2026-04-05 00:39:43.251657 | orchestrator | 2026-04-05 00:39:43.251668 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:39:43.251679 | orchestrator | Sunday 05 April 2026 00:39:42 +0000 (0:00:01.904) 0:00:23.481 ********** 2026-04-05 00:39:43.251690 | orchestrator | =============================================================================== 2026-04-05 00:39:43.251701 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.73s 2026-04-05 00:39:43.251712 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.47s 2026-04-05 00:39:43.251723 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2026-04-05 00:39:43.251733 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.30s 2026-04-05 00:39:43.251744 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.27s 2026-04-05 00:39:43.251755 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.20s 2026-04-05 00:39:43.251766 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.96s 2026-04-05 00:39:43.251777 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.67s 2026-04-05 00:39:43.251787 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.63s 2026-04-05 00:39:43.457713 | orchestrator | ++ semver latest 7.1.1 2026-04-05 00:39:43.503945 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:39:43.504063 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:39:43.504087 | orchestrator | + sudo systemctl restart manager.service 2026-04-05 00:39:57.233485 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 00:39:57.233618 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-05 00:39:57.233646 | orchestrator | + local max_attempts=60 2026-04-05 00:39:57.233667 | orchestrator | + local name=ceph-ansible 2026-04-05 00:39:57.233686 | orchestrator | + local attempt_num=1 2026-04-05 00:39:57.233705 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:39:57.266806 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:39:57.266930 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:39:57.266958 | orchestrator | + sleep 5 2026-04-05 00:40:02.272662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:02.307457 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:02.307538 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:02.307548 | orchestrator | + sleep 5 2026-04-05 00:40:07.310913 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:07.350974 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:07.351059 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:07.351082 | orchestrator | + sleep 5 2026-04-05 00:40:12.355595 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:12.390898 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:12.390992 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:12.391038 | orchestrator | + sleep 5 2026-04-05 00:40:17.395502 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:17.428964 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:17.429082 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:17.429105 | orchestrator | + sleep 5 2026-04-05 00:40:22.434208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:22.472526 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:22.472618 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:22.472633 | orchestrator | + sleep 5 2026-04-05 00:40:27.478948 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:27.517988 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:27.518122 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:27.518139 | orchestrator | + sleep 5 2026-04-05 00:40:32.524040 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:32.568458 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:32.568529 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:32.568874 | orchestrator | + sleep 5 2026-04-05 00:40:37.573039 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:37.612754 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:37.612887 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:37.612912 | orchestrator | + sleep 5 2026-04-05 00:40:42.617283 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:42.656709 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:42.656813 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:42.656830 | orchestrator | + sleep 5 2026-04-05 00:40:47.661814 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:47.699943 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:47.700035 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:47.700049 | orchestrator | + sleep 5 2026-04-05 00:40:52.704957 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:52.744289 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:52.744438 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:52.744457 | orchestrator | + sleep 5 2026-04-05 00:40:57.749549 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:40:57.784851 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-05 00:40:57.784923 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-05 00:40:57.784931 | orchestrator | + sleep 5 2026-04-05 00:41:02.791134 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 00:41:02.826478 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:41:02.826601 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-05 00:41:02.826618 | orchestrator | + local max_attempts=60 2026-04-05 00:41:02.826630 | orchestrator | + local name=kolla-ansible 2026-04-05 00:41:02.826655 | orchestrator | + local attempt_num=1 2026-04-05 00:41:02.827148 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-05 00:41:02.854186 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:41:02.854285 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-05 00:41:02.854302 | orchestrator | + local max_attempts=60 2026-04-05 00:41:02.854315 | orchestrator | + local name=osism-ansible 2026-04-05 00:41:02.854326 | orchestrator | + local attempt_num=1 2026-04-05 00:41:02.855048 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-05 00:41:02.884089 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 00:41:02.884175 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-05 00:41:02.884189 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-05 00:41:03.054148 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-05 00:41:03.215529 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-05 00:41:03.370586 | orchestrator | ARA in osism-ansible already disabled. 2026-04-05 00:41:03.547404 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-05 00:41:03.547504 | orchestrator | + osism apply gather-facts 2026-04-05 00:41:14.940566 | orchestrator | 2026-04-05 00:41:14 | INFO  | Prepare task for execution of gather-facts. 2026-04-05 00:41:15.041676 | orchestrator | 2026-04-05 00:41:15 | INFO  | Task 5cf1ed64-88d3-4bc1-ae3c-806e925d12a1 (gather-facts) was prepared for execution. 2026-04-05 00:41:15.041812 | orchestrator | 2026-04-05 00:41:15 | INFO  | It takes a moment until task 5cf1ed64-88d3-4bc1-ae3c-806e925d12a1 (gather-facts) has been started and output is visible here. 
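The `+`-prefixed xtrace lines above show `wait_for_container_healthy` polling `docker inspect` every 5 seconds until the container reports `healthy`. A rough reconstruction of that helper is sketched below; the function name, arguments, and the `attempt_num++ == max_attempts` guard come from the trace, but the body is an assumption and the real script in `/opt/configuration` may differ. A stub `docker` function is used here so the sketch runs without Docker:

```shell
#!/usr/bin/env bash
# Stub so the example runs without Docker; the traced script calls
# /usr/bin/docker inspect directly.
docker() { echo healthy; }

# Reconstruction (assumed, from the xtrace above): poll the container's
# health status up to max_attempts times, 5 s apart, until it is healthy.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        # Give up once the attempt counter reaches max_attempts.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}

wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible healthy"
```

In the log this loop rides out the `unhealthy` → `starting` → `healthy` transition caused by the `systemctl restart manager.service` just before it.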
2026-04-05 00:41:26.312058 | orchestrator | 2026-04-05 00:41:26.312201 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:41:26.312229 | orchestrator | 2026-04-05 00:41:26.312249 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 00:41:26.312268 | orchestrator | Sunday 05 April 2026 00:41:18 +0000 (0:00:00.319) 0:00:00.319 ********** 2026-04-05 00:41:26.312287 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:41:26.312307 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:41:26.312328 | orchestrator | ok: [testbed-manager] 2026-04-05 00:41:26.312347 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:41:26.312366 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:41:26.312476 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:41:26.312497 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:41:26.312518 | orchestrator | 2026-04-05 00:41:26.312537 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 00:41:26.312556 | orchestrator | 2026-04-05 00:41:26.312576 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 00:41:26.312596 | orchestrator | Sunday 05 April 2026 00:41:25 +0000 (0:00:06.820) 0:00:07.139 ********** 2026-04-05 00:41:26.312615 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:41:26.312636 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:41:26.312656 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:41:26.312679 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:41:26.312700 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:41:26.312719 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:41:26.312738 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:41:26.312757 | orchestrator | 2026-04-05 00:41:26.312776 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-05 00:41:26.312820 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:26.312843 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:26.312863 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:26.312876 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:26.312887 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:26.312898 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:26.312909 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 00:41:26.312920 | orchestrator | 2026-04-05 00:41:26.312931 | orchestrator | 2026-04-05 00:41:26.312941 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:41:26.312952 | orchestrator | Sunday 05 April 2026 00:41:26 +0000 (0:00:00.648) 0:00:07.788 ********** 2026-04-05 00:41:26.312963 | orchestrator | =============================================================================== 2026-04-05 00:41:26.312974 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.82s 2026-04-05 00:41:26.312984 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s 2026-04-05 00:41:26.537853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-05 00:41:26.554345 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-05 
00:41:26.575010 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-05 00:41:26.597728 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-05 00:41:26.609635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-05 00:41:26.626262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-05 00:41:26.637790 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-05 00:41:26.649813 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-05 00:41:26.669034 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-05 00:41:26.688641 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-05 00:41:26.706312 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-05 00:41:26.728616 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-05 00:41:26.749181 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-05 00:41:26.769488 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-05 00:41:26.791896 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-05 00:41:26.814939 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-05 00:41:26.835455 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-05 00:41:26.853822 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-05 00:41:26.876214 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-05 00:41:26.894493 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-05 00:41:26.908755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-05 00:41:26.921251 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-05 00:41:26.944413 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-05 00:41:26.965815 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-05 00:41:27.083070 | orchestrator | ok: Runtime: 0:24:16.990881 2026-04-05 00:41:27.177966 | 2026-04-05 00:41:27.178104 | TASK [Deploy services] 2026-04-05 00:41:27.713113 | orchestrator | skipping: Conditional result was False 2026-04-05 00:41:27.730984 | 2026-04-05 00:41:27.731268 | TASK [Deploy in a nutshell] 2026-04-05 00:41:28.445239 | orchestrator | + set -e 2026-04-05 00:41:28.445504 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 00:41:28.445541 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 00:41:28.445576 | orchestrator | ++ INTERACTIVE=false 2026-04-05 00:41:28.445608 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 00:41:28.445667 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 00:41:28.445723 | 
orchestrator | + source /opt/manager-vars.sh 2026-04-05 00:41:28.445784 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 00:41:28.445825 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 00:41:28.445850 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 00:41:28.445906 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 00:41:28.445930 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 00:41:28.445959 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 00:41:28.445987 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 00:41:28.446079 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 00:41:28.446163 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 00:41:28.446181 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 00:41:28.446192 | orchestrator | ++ export ARA=false 2026-04-05 00:41:28.446204 | orchestrator | ++ ARA=false 2026-04-05 00:41:28.446215 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 00:41:28.446226 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 00:41:28.446247 | orchestrator | ++ export TEMPEST=true 2026-04-05 00:41:28.446258 | orchestrator | ++ TEMPEST=true 2026-04-05 00:41:28.446269 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 00:41:28.446279 | orchestrator | ++ IS_ZUUL=true 2026-04-05 00:41:28.446290 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 00:41:28.446302 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 00:41:28.446313 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 00:41:28.446446 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 00:41:28.446486 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 00:41:28.446499 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 00:41:28.446510 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 00:41:28.446521 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 00:41:28.446532 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 00:41:28.446543 
| orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 00:41:28.446559 | orchestrator | + echo 2026-04-05 00:41:28.447500 | orchestrator | 2026-04-05 00:41:28.447656 | orchestrator | # PULL IMAGES 2026-04-05 00:41:28.447675 | orchestrator | 2026-04-05 00:41:28.447699 | orchestrator | + echo '# PULL IMAGES' 2026-04-05 00:41:28.447713 | orchestrator | + echo 2026-04-05 00:41:28.448694 | orchestrator | ++ semver latest 7.0.0 2026-04-05 00:41:28.497512 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 00:41:28.497608 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 00:41:28.497645 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-05 00:41:29.844307 | orchestrator | 2026-04-05 00:41:29 | INFO  | Trying to run play pull-images in environment custom 2026-04-05 00:41:39.979700 | orchestrator | 2026-04-05 00:41:39 | INFO  | Prepare task for execution of pull-images. 2026-04-05 00:41:40.062005 | orchestrator | 2026-04-05 00:41:40 | INFO  | Task f351e2b2-5d58-4a21-8f0e-6a9f04605d4c (pull-images) was prepared for execution. 2026-04-05 00:41:40.062169 | orchestrator | 2026-04-05 00:41:40 | INFO  | Task f351e2b2-5d58-4a21-8f0e-6a9f04605d4c is running in background. No more output. Check ARA for logs. 2026-04-05 00:41:41.687813 | orchestrator | 2026-04-05 00:41:41 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-05 00:41:51.751192 | orchestrator | 2026-04-05 00:41:51 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-05 00:41:51.896340 | orchestrator | 2026-04-05 00:41:51 | INFO  | Task 3ff5ebb5-e6bb-486c-93f1-dd977291127d (wipe-partitions) was prepared for execution. 2026-04-05 00:41:51.896512 | orchestrator | 2026-04-05 00:41:51 | INFO  | It takes a moment until task 3ff5ebb5-e6bb-486c-93f1-dd977291127d (wipe-partitions) has been started and output is visible here. 
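The `source /opt/manager-vars.sh` trace above expands to the settings below; this is a self-contained reproduction of exactly the values shown in the `++ export` lines, useful when replaying the job by hand.

```shell
# Settings sourced from /opt/manager-vars.sh, copied verbatim
# from the xtrace (++ export) lines in the log above.
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export CONFIGURATION_VERSION=main
export MANAGER_VERSION=latest
export OPENSTACK_VERSION=2024.2
export ARA=false
export DEPLOY_MODE=manager
export TEMPEST=true
export IS_ZUUL=true
export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
export EXTERNAL_API=false
export IMAGE_USER=ubuntu
export IMAGE_NODE_USER=ubuntu
export CEPH_STACK=ceph-ansible
```

These values drive the branch taken right after sourcing: `MANAGER_VERSION=latest` makes `semver latest 7.0.0` return -1, so the `[[ latest == latest ]]` branch runs `osism apply --no-wait -r 2 -e custom pull-images`.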
2026-04-05 00:42:03.285772 | orchestrator | 2026-04-05 00:42:03.285882 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-05 00:42:03.285898 | orchestrator | 2026-04-05 00:42:03.285910 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-05 00:42:03.285930 | orchestrator | Sunday 05 April 2026 00:41:54 +0000 (0:00:00.147) 0:00:00.147 ********** 2026-04-05 00:42:03.285993 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:42:03.286010 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:42:03.286105 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:42:03.286127 | orchestrator | 2026-04-05 00:42:03.286146 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-05 00:42:03.286163 | orchestrator | Sunday 05 April 2026 00:41:55 +0000 (0:00:00.922) 0:00:01.070 ********** 2026-04-05 00:42:03.286185 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:03.286203 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:03.286222 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:42:03.286240 | orchestrator | 2026-04-05 00:42:03.286258 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-05 00:42:03.286277 | orchestrator | Sunday 05 April 2026 00:41:56 +0000 (0:00:00.288) 0:00:01.359 ********** 2026-04-05 00:42:03.286296 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:42:03.286316 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:42:03.286337 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:42:03.286356 | orchestrator | 2026-04-05 00:42:03.286374 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-05 00:42:03.286423 | orchestrator | Sunday 05 April 2026 00:41:56 +0000 (0:00:00.537) 0:00:01.897 ********** 2026-04-05 00:42:03.286441 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 00:42:03.286459 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:03.286477 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:42:03.286495 | orchestrator | 2026-04-05 00:42:03.286515 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-05 00:42:03.286533 | orchestrator | Sunday 05 April 2026 00:41:56 +0000 (0:00:00.237) 0:00:02.134 ********** 2026-04-05 00:42:03.286552 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-05 00:42:03.286572 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-05 00:42:03.286583 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-05 00:42:03.286594 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-05 00:42:03.286605 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-05 00:42:03.286615 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-05 00:42:03.286626 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-05 00:42:03.286637 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-05 00:42:03.286647 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-05 00:42:03.286659 | orchestrator | 2026-04-05 00:42:03.286670 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-05 00:42:03.286681 | orchestrator | Sunday 05 April 2026 00:41:58 +0000 (0:00:01.356) 0:00:03.491 ********** 2026-04-05 00:42:03.286692 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-05 00:42:03.286703 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-05 00:42:03.286713 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-05 00:42:03.286724 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-05 00:42:03.286734 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-05 00:42:03.286745 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-05 00:42:03.286756 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-05 00:42:03.286766 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-05 00:42:03.286777 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-05 00:42:03.286787 | orchestrator | 2026-04-05 00:42:03.286806 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-05 00:42:03.286817 | orchestrator | Sunday 05 April 2026 00:41:59 +0000 (0:00:01.355) 0:00:04.847 ********** 2026-04-05 00:42:03.286828 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-05 00:42:03.286838 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-05 00:42:03.286849 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-05 00:42:03.286860 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-05 00:42:03.286908 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-05 00:42:03.286919 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-05 00:42:03.286930 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-05 00:42:03.286941 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-05 00:42:03.286951 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-05 00:42:03.286962 | orchestrator | 2026-04-05 00:42:03.286973 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-05 00:42:03.286984 | orchestrator | Sunday 05 April 2026 00:42:01 +0000 (0:00:02.114) 0:00:06.961 ********** 2026-04-05 00:42:03.286995 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:42:03.287005 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:42:03.287016 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:42:03.287027 | orchestrator | 2026-04-05 00:42:03.287037 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-05 00:42:03.287048 | orchestrator | Sunday 05 April 2026 00:42:02 +0000 (0:00:00.579) 0:00:07.540 ********** 2026-04-05 00:42:03.287059 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:42:03.287070 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:42:03.287081 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:42:03.287093 | orchestrator | 2026-04-05 00:42:03.287104 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:42:03.287117 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:03.287128 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:03.287162 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:03.287174 | orchestrator | 2026-04-05 00:42:03.287185 | orchestrator | 2026-04-05 00:42:03.287196 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:42:03.287206 | orchestrator | Sunday 05 April 2026 00:42:03 +0000 (0:00:00.769) 0:00:08.310 ********** 2026-04-05 00:42:03.287217 | orchestrator | =============================================================================== 2026-04-05 00:42:03.287228 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.11s 2026-04-05 00:42:03.287238 | orchestrator | Check device availability ----------------------------------------------- 1.36s 2026-04-05 00:42:03.287250 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2026-04-05 00:42:03.287261 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.92s 2026-04-05 00:42:03.287271 | orchestrator | Request device events from the kernel 
----------------------------------- 0.77s 2026-04-05 00:42:03.287282 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2026-04-05 00:42:03.287293 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s 2026-04-05 00:42:03.287303 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s 2026-04-05 00:42:03.287314 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-04-05 00:42:14.805062 | orchestrator | 2026-04-05 00:42:14 | INFO  | Prepare task for execution of facts. 2026-04-05 00:42:14.887014 | orchestrator | 2026-04-05 00:42:14 | INFO  | Task 80611e53-9451-41ab-96e8-6b3de08af13d (facts) was prepared for execution. 2026-04-05 00:42:14.887113 | orchestrator | 2026-04-05 00:42:14 | INFO  | It takes a moment until task 80611e53-9451-41ab-96e8-6b3de08af13d (facts) has been started and output is visible here. 2026-04-05 00:42:27.334388 | orchestrator | 2026-04-05 00:42:27.334531 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 00:42:27.334555 | orchestrator | 2026-04-05 00:42:27.334597 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 00:42:27.334616 | orchestrator | Sunday 05 April 2026 00:42:18 +0000 (0:00:00.355) 0:00:00.355 ********** 2026-04-05 00:42:27.334626 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:42:27.334637 | orchestrator | ok: [testbed-manager] 2026-04-05 00:42:27.334646 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:42:27.334656 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:42:27.334665 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:42:27.334675 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:42:27.334684 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:42:27.334693 | orchestrator | 2026-04-05 00:42:27.334703 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-04-05 00:42:27.334713 | orchestrator | Sunday 05 April 2026 00:42:19 +0000 (0:00:01.338) 0:00:01.693 ********** 2026-04-05 00:42:27.334730 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:42:27.334754 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:42:27.334772 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:42:27.334787 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:42:27.334803 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:27.334818 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:27.334834 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:42:27.334847 | orchestrator | 2026-04-05 00:42:27.334862 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:42:27.334892 | orchestrator | 2026-04-05 00:42:27.334909 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 00:42:27.334926 | orchestrator | Sunday 05 April 2026 00:42:20 +0000 (0:00:01.218) 0:00:02.911 ********** 2026-04-05 00:42:27.334941 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:42:27.334957 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:42:27.334972 | orchestrator | ok: [testbed-manager] 2026-04-05 00:42:27.334988 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:42:27.335006 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:42:27.335024 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:42:27.335041 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:42:27.335055 | orchestrator | 2026-04-05 00:42:27.335066 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 00:42:27.335078 | orchestrator | 2026-04-05 00:42:27.335088 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 00:42:27.335099 | orchestrator | Sunday 05 April 
2026 00:42:26 +0000 (0:00:05.690) 0:00:08.602 ********** 2026-04-05 00:42:27.335110 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:42:27.335122 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:42:27.335133 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:42:27.335145 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:42:27.335155 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:27.335166 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:27.335177 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:42:27.335189 | orchestrator | 2026-04-05 00:42:27.335200 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:42:27.335212 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:27.335225 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:27.335237 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:27.335248 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:27.335260 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:27.335281 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:27.335291 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:42:27.335300 | orchestrator | 2026-04-05 00:42:27.335310 | orchestrator | 2026-04-05 00:42:27.335319 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:42:27.335329 | orchestrator | Sunday 05 April 2026 00:42:27 +0000 (0:00:00.469) 0:00:09.071 ********** 2026-04-05 00:42:27.335338 
| orchestrator | =============================================================================== 2026-04-05 00:42:27.335348 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.69s 2026-04-05 00:42:27.335358 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s 2026-04-05 00:42:27.335367 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2026-04-05 00:42:27.335377 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-04-05 00:42:28.625802 | orchestrator | 2026-04-05 00:42:28 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-05 00:42:28.681885 | orchestrator | 2026-04-05 00:42:28 | INFO  | Task 006ffc86-7855-4c68-9d29-24ce74626551 (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-05 00:42:28.681942 | orchestrator | 2026-04-05 00:42:28 | INFO  | It takes a moment until task 006ffc86-7855-4c68-9d29-24ce74626551 (ceph-configure-lvm-volumes) has been started and output is visible here. 
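The wipe-partitions play that ran earlier (signature wipe, zeroing the first 32M, then udev reload/trigger) reduces to roughly the per-device sequence below. This is a destructive sketch, assuming `/dev/sdX` targets as in the play, so it only defines a function and does not invoke it.

```shell
# Destructive sketch of the wipe-partitions steps logged above; never run
# this against a disk you care about. The function is defined, not called.
wipe_device() {
  dev="$1"
  wipefs --all "$dev"                       # drop filesystem/RAID/LVM signatures
  dd if=/dev/zero of="$dev" bs=1M count=32  # zero first 32M (partition table, labels)
}
# Afterwards the play refreshes kernel/udev state so the cleared devices
# are re-read, matching the "Reload udev rules" and "Request device events
# from the kernel" tasks:
#   udevadm control --reload-rules && udevadm trigger
```

Zeroing only the first 32M is enough here because LVM and partition metadata live at the start of the disk; the play does not need a full overwrite before handing the devices to Ceph.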
2026-04-05 00:42:40.473171 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 00:42:40.473274 | orchestrator | 2.16.14 2026-04-05 00:42:40.473290 | orchestrator | 2026-04-05 00:42:40.473301 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-05 00:42:40.473313 | orchestrator | 2026-04-05 00:42:40.473323 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 00:42:40.473333 | orchestrator | Sunday 05 April 2026 00:42:33 +0000 (0:00:00.293) 0:00:00.293 ********** 2026-04-05 00:42:40.473343 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 00:42:40.473353 | orchestrator | 2026-04-05 00:42:40.473363 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 00:42:40.473372 | orchestrator | Sunday 05 April 2026 00:42:33 +0000 (0:00:00.232) 0:00:00.526 ********** 2026-04-05 00:42:40.473382 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:42:40.473393 | orchestrator | 2026-04-05 00:42:40.473402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473412 | orchestrator | Sunday 05 April 2026 00:42:33 +0000 (0:00:00.212) 0:00:00.739 ********** 2026-04-05 00:42:40.473508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-05 00:42:40.473522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-05 00:42:40.473532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-05 00:42:40.473542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-05 00:42:40.473551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-05 
00:42:40.473561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-05 00:42:40.473570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-05 00:42:40.473580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-05 00:42:40.473589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-05 00:42:40.473599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-05 00:42:40.473629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-05 00:42:40.473640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-05 00:42:40.473649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-05 00:42:40.473658 | orchestrator | 2026-04-05 00:42:40.473668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473677 | orchestrator | Sunday 05 April 2026 00:42:33 +0000 (0:00:00.378) 0:00:01.117 ********** 2026-04-05 00:42:40.473687 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473698 | orchestrator | 2026-04-05 00:42:40.473709 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473720 | orchestrator | Sunday 05 April 2026 00:42:34 +0000 (0:00:00.486) 0:00:01.604 ********** 2026-04-05 00:42:40.473732 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473743 | orchestrator | 2026-04-05 00:42:40.473754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473770 | orchestrator | Sunday 05 April 2026 00:42:34 +0000 (0:00:00.192) 0:00:01.796 ********** 2026-04-05 
00:42:40.473782 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473793 | orchestrator | 2026-04-05 00:42:40.473804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473815 | orchestrator | Sunday 05 April 2026 00:42:34 +0000 (0:00:00.187) 0:00:01.983 ********** 2026-04-05 00:42:40.473827 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473839 | orchestrator | 2026-04-05 00:42:40.473848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473858 | orchestrator | Sunday 05 April 2026 00:42:34 +0000 (0:00:00.190) 0:00:02.174 ********** 2026-04-05 00:42:40.473868 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473877 | orchestrator | 2026-04-05 00:42:40.473887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473896 | orchestrator | Sunday 05 April 2026 00:42:35 +0000 (0:00:00.193) 0:00:02.367 ********** 2026-04-05 00:42:40.473906 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473915 | orchestrator | 2026-04-05 00:42:40.473925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473934 | orchestrator | Sunday 05 April 2026 00:42:35 +0000 (0:00:00.193) 0:00:02.561 ********** 2026-04-05 00:42:40.473944 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473954 | orchestrator | 2026-04-05 00:42:40.473963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.473973 | orchestrator | Sunday 05 April 2026 00:42:35 +0000 (0:00:00.205) 0:00:02.766 ********** 2026-04-05 00:42:40.473982 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.473992 | orchestrator | 2026-04-05 00:42:40.474001 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-05 00:42:40.474011 | orchestrator | Sunday 05 April 2026 00:42:35 +0000 (0:00:00.205) 0:00:02.972 ********** 2026-04-05 00:42:40.474075 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f) 2026-04-05 00:42:40.474086 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f) 2026-04-05 00:42:40.474096 | orchestrator | 2026-04-05 00:42:40.474106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.474132 | orchestrator | Sunday 05 April 2026 00:42:36 +0000 (0:00:00.418) 0:00:03.391 ********** 2026-04-05 00:42:40.474142 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e) 2026-04-05 00:42:40.474152 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e) 2026-04-05 00:42:40.474161 | orchestrator | 2026-04-05 00:42:40.474177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.474195 | orchestrator | Sunday 05 April 2026 00:42:36 +0000 (0:00:00.396) 0:00:03.787 ********** 2026-04-05 00:42:40.474205 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3) 2026-04-05 00:42:40.474214 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3) 2026-04-05 00:42:40.474224 | orchestrator | 2026-04-05 00:42:40.474234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.474244 | orchestrator | Sunday 05 April 2026 00:42:37 +0000 (0:00:00.639) 0:00:04.427 ********** 2026-04-05 00:42:40.474253 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86) 2026-04-05 00:42:40.474263 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86) 2026-04-05 00:42:40.474272 | orchestrator | 2026-04-05 00:42:40.474282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:42:40.474291 | orchestrator | Sunday 05 April 2026 00:42:37 +0000 (0:00:00.646) 0:00:05.073 ********** 2026-04-05 00:42:40.474301 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:42:40.474310 | orchestrator | 2026-04-05 00:42:40.474320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:40.474330 | orchestrator | Sunday 05 April 2026 00:42:38 +0000 (0:00:00.762) 0:00:05.836 ********** 2026-04-05 00:42:40.474339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-05 00:42:40.474349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-05 00:42:40.474358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-05 00:42:40.474368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-05 00:42:40.474378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-05 00:42:40.474387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-05 00:42:40.474397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-05 00:42:40.474406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-05 00:42:40.474437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-05 00:42:40.474449 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-05 00:42:40.474459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-05 00:42:40.474468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-05 00:42:40.474478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-05 00:42:40.474487 | orchestrator | 2026-04-05 00:42:40.474497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:40.474507 | orchestrator | Sunday 05 April 2026 00:42:39 +0000 (0:00:00.409) 0:00:06.246 ********** 2026-04-05 00:42:40.474516 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.474526 | orchestrator | 2026-04-05 00:42:40.474535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:40.474545 | orchestrator | Sunday 05 April 2026 00:42:39 +0000 (0:00:00.186) 0:00:06.432 ********** 2026-04-05 00:42:40.474554 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.474564 | orchestrator | 2026-04-05 00:42:40.474573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:40.474583 | orchestrator | Sunday 05 April 2026 00:42:39 +0000 (0:00:00.208) 0:00:06.641 ********** 2026-04-05 00:42:40.474592 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.474608 | orchestrator | 2026-04-05 00:42:40.474617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:40.474627 | orchestrator | Sunday 05 April 2026 00:42:39 +0000 (0:00:00.201) 0:00:06.843 ********** 2026-04-05 00:42:40.474636 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:42:40.474646 | orchestrator | 2026-04-05 00:42:40.474655 | orchestrator | TASK [Add known 
partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:39 +0000 (0:00:00.187) 0:00:07.030 **********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:40 +0000 (0:00:00.198) 0:00:07.228 **********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:40 +0000 (0:00:00.208) 0:00:07.437 **********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:40 +0000 (0:00:00.201) 0:00:07.639 **********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:40 +0000 (0:00:00.179) 0:00:07.819 **********
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:41 +0000 (0:00:00.986) 0:00:08.805 **********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:41 +0000 (0:00:00.192) 0:00:08.998 **********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:42 +0000 (0:00:00.184) 0:00:09.182 **********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:42 +0000 (0:00:00.202) 0:00:09.385 **********
skipping: [testbed-node-3]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Sunday 05 April 2026 00:42:42 +0000 (0:00:00.221) 0:00:09.607 **********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names] ***************************************************
Sunday 05 April 2026 00:42:42 +0000 (0:00:00.187) 0:00:09.794 **********
skipping: [testbed-node-3]

TASK [Generate DB VG names] ****************************************************
Sunday 05 April 2026 00:42:42 +0000 (0:00:00.135) 0:00:09.929 **********
skipping: [testbed-node-3]

TASK [Generate shared DB/WAL VG names] *****************************************
Sunday 05 April 2026 00:42:42 +0000 (0:00:00.128) 0:00:10.058 **********
skipping: [testbed-node-3]

TASK [Define lvm_volumes structures] *******************************************
Sunday 05 April 2026 00:42:43 +0000 (0:00:00.172) 0:00:10.230 **********
ok: [testbed-node-3]

TASK [Generate lvm_volumes structure (block only)] *****************************
Sunday 05 April 2026 00:42:43 +0000 (0:00:00.146) 0:00:10.377 **********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9657aa76-f30a-575f-81fa-dc230eadde03'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a27db0d-e52c-5340-bfad-66c075ab1c61'}})

TASK [Generate lvm_volumes structure (block + db)] *****************************
Sunday 05 April 2026 00:42:43 +0000 (0:00:00.184) 0:00:10.562 **********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9657aa76-f30a-575f-81fa-dc230eadde03'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a27db0d-e52c-5340-bfad-66c075ab1c61'}})
skipping: [testbed-node-3]

TASK [Generate lvm_volumes structure (block + wal)] ****************************
Sunday 05 April 2026 00:42:43 +0000 (0:00:00.143) 0:00:10.706 **********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9657aa76-f30a-575f-81fa-dc230eadde03'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a27db0d-e52c-5340-bfad-66c075ab1c61'}})
skipping: [testbed-node-3]

TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
Sunday 05 April 2026 00:42:43 +0000 (0:00:00.340) 0:00:11.047 **********
skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9657aa76-f30a-575f-81fa-dc230eadde03'}})
skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a27db0d-e52c-5340-bfad-66c075ab1c61'}})
skipping: [testbed-node-3]

TASK [Compile lvm_volumes] *****************************************************
Sunday 05 April 2026 00:42:44 +0000 (0:00:00.153) 0:00:11.201 **********
ok: [testbed-node-3]

TASK [Set OSD devices config data] *********************************************
Sunday 05 April 2026 00:42:44 +0000 (0:00:00.159) 0:00:11.360 **********
ok: [testbed-node-3]

TASK [Set DB devices config data] **********************************************
Sunday 05 April 2026 00:42:44 +0000 (0:00:00.142) 0:00:11.503 **********
skipping: [testbed-node-3]

TASK [Set WAL devices config data] *********************************************
Sunday 05 April 2026 00:42:44 +0000 (0:00:00.140) 0:00:11.643 **********
skipping: [testbed-node-3]

TASK [Set DB+WAL devices config data] ******************************************
Sunday 05 April 2026 00:42:44 +0000 (0:00:00.123) 0:00:11.766 **********
skipping: [testbed-node-3]

TASK [Print ceph_osd_devices] **************************************************
Sunday 05 April 2026 00:42:44 +0000 (0:00:00.134) 0:00:11.901 **********
ok: [testbed-node-3] => {
    "ceph_osd_devices": {
        "sdb": {
            "osd_lvm_uuid": "9657aa76-f30a-575f-81fa-dc230eadde03"
        },
        "sdc": {
            "osd_lvm_uuid": "8a27db0d-e52c-5340-bfad-66c075ab1c61"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Sunday 05 April 2026 00:42:44 +0000 (0:00:00.152) 0:00:12.054 **********
skipping: [testbed-node-3]

TASK [Print DB devices] ********************************************************
Sunday 05 April 2026 00:42:45 +0000 (0:00:00.135) 0:00:12.189 **********
skipping: [testbed-node-3]

TASK [Print shared DB/WAL devices] *********************************************
Sunday 05 April 2026 00:42:45 +0000 (0:00:00.134) 0:00:12.324 **********
skipping: [testbed-node-3]

TASK [Print configuration data] ************************************************
Sunday 05 April 2026 00:42:45 +0000 (0:00:00.151) 0:00:12.475 **********
changed: [testbed-node-3] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "9657aa76-f30a-575f-81fa-dc230eadde03"
            },
            "sdc": {
                "osd_lvm_uuid": "8a27db0d-e52c-5340-bfad-66c075ab1c61"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-9657aa76-f30a-575f-81fa-dc230eadde03",
                "data_vg": "ceph-9657aa76-f30a-575f-81fa-dc230eadde03"
            },
            {
                "data": "osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61",
                "data_vg": "ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Sunday 05 April 2026 00:42:45 +0000 (0:00:00.195) 0:00:12.671 **********
changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]

PLAY [Ceph
configure LVM] ******************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Sunday 05 April 2026 00:42:47 +0000 (0:00:02.278) 0:00:14.950 **********
ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Sunday 05 April 2026 00:42:48 +0000 (0:00:00.251) 0:00:15.201 **********
ok: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:48 +0000 (0:00:00.222) 0:00:15.423 **********
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:48 +0000 (0:00:00.396) 0:00:15.820 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:48 +0000 (0:00:00.213) 0:00:16.033 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:49 +0000 (0:00:00.202) 0:00:16.236 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:49 +0000 (0:00:00.199) 0:00:16.435 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:49 +0000 (0:00:00.175) 0:00:16.611 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:50 +0000 (0:00:00.642) 0:00:17.253 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:50 +0000 (0:00:00.199) 0:00:17.453 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:50 +0000 (0:00:00.197) 0:00:17.650 **********
skipping: [testbed-node-4]

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:50 +0000 (0:00:00.197) 0:00:17.848 **********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e)

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:51 +0000 (0:00:00.494) 0:00:18.342 **********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189)

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:51 +0000 (0:00:00.408) 0:00:18.751 **********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a)

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:51 +0000 (0:00:00.416) 0:00:19.167 **********
ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8)
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8)

TASK [Add known links to the list of available block devices] ******************
Sunday 05 April 2026 00:42:52 +0000 (0:00:00.406) 0:00:19.574 **********
ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Sunday 05 April 2026 00:42:52 +0000 (0:00:00.318) 0:00:19.892 **********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
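[Editor's note] The per-device includes above feed a partition scan: for each scanned disk, the partitions that belong to it (in this run, sda1, sda14, sda15, and sda16 on sda) are appended to the candidate list, while disks without partitions produce the skipped iterations seen below. A minimal sketch of that selection, with a hypothetical function name and sample data taken from this log, not the playbook's actual task logic:

```python
def known_partitions(devices, partitions):
    """Return the partitions whose name is a scanned device name plus a numeric suffix."""
    found = []
    for dev in devices:
        for part in partitions:
            # e.g. "sda14" belongs to "sda": same prefix, digits-only remainder
            if part.startswith(dev) and part[len(dev):].isdigit():
                found.append(part)
    return found

# Device and partition names as reported for testbed-node-4 in this log
devices = ["sda", "sdb", "sdc", "sdd", "sr0"]
partitions = ["sda1", "sda14", "sda15", "sda16"]
print(known_partitions(devices, partitions))
```

Only sda contributes matches here, which is why the other per-device iterations are logged as `skipping`.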
2026-04-05 00:42:55.925746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:42:55.925757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:42:55.925768 | orchestrator | 2026-04-05 00:42:55.925779 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.925789 | orchestrator | Sunday 05 April 2026 00:42:53 +0000 (0:00:00.495) 0:00:20.388 ********** 2026-04-05 00:42:55.925800 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.925811 | orchestrator | 2026-04-05 00:42:55.925821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.925832 | orchestrator | Sunday 05 April 2026 00:42:53 +0000 (0:00:00.228) 0:00:20.616 ********** 2026-04-05 00:42:55.925843 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.925853 | orchestrator | 2026-04-05 00:42:55.925864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.925875 | orchestrator | Sunday 05 April 2026 00:42:54 +0000 (0:00:00.668) 0:00:21.284 ********** 2026-04-05 00:42:55.925886 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.925896 | orchestrator | 2026-04-05 00:42:55.925907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.925918 | orchestrator | Sunday 05 April 2026 00:42:54 +0000 (0:00:00.206) 0:00:21.491 ********** 2026-04-05 00:42:55.925928 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.925939 | orchestrator | 2026-04-05 00:42:55.925950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.925960 | orchestrator | Sunday 05 April 2026 00:42:54 +0000 (0:00:00.180) 0:00:21.672 ********** 2026-04-05 00:42:55.925971 
| orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.925981 | orchestrator | 2026-04-05 00:42:55.925992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.926003 | orchestrator | Sunday 05 April 2026 00:42:54 +0000 (0:00:00.189) 0:00:21.861 ********** 2026-04-05 00:42:55.926064 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.926085 | orchestrator | 2026-04-05 00:42:55.926096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.926107 | orchestrator | Sunday 05 April 2026 00:42:54 +0000 (0:00:00.191) 0:00:22.052 ********** 2026-04-05 00:42:55.926117 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.926128 | orchestrator | 2026-04-05 00:42:55.926168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.926180 | orchestrator | Sunday 05 April 2026 00:42:55 +0000 (0:00:00.177) 0:00:22.230 ********** 2026-04-05 00:42:55.926191 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:42:55.926201 | orchestrator | 2026-04-05 00:42:55.926212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.926223 | orchestrator | Sunday 05 April 2026 00:42:55 +0000 (0:00:00.155) 0:00:22.386 ********** 2026-04-05 00:42:55.926233 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-05 00:42:55.926245 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-05 00:42:55.926255 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-05 00:42:55.926266 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-05 00:42:55.926277 | orchestrator | 2026-04-05 00:42:55.926287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:42:55.926298 | orchestrator | Sunday 05 April 2026 00:42:55 +0000 (0:00:00.601) 0:00:22.988 
********** 2026-04-05 00:42:55.926309 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.485238 | orchestrator | 2026-04-05 00:43:02.485350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:43:02.485368 | orchestrator | Sunday 05 April 2026 00:42:55 +0000 (0:00:00.184) 0:00:23.173 ********** 2026-04-05 00:43:02.485381 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.485393 | orchestrator | 2026-04-05 00:43:02.485404 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:43:02.485416 | orchestrator | Sunday 05 April 2026 00:42:56 +0000 (0:00:00.166) 0:00:23.339 ********** 2026-04-05 00:43:02.485473 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.485485 | orchestrator | 2026-04-05 00:43:02.485497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:43:02.485508 | orchestrator | Sunday 05 April 2026 00:42:56 +0000 (0:00:00.182) 0:00:23.522 ********** 2026-04-05 00:43:02.485519 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.485530 | orchestrator | 2026-04-05 00:43:02.485541 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-05 00:43:02.485552 | orchestrator | Sunday 05 April 2026 00:42:56 +0000 (0:00:00.178) 0:00:23.700 ********** 2026-04-05 00:43:02.485563 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-05 00:43:02.485575 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-05 00:43:02.485586 | orchestrator | 2026-04-05 00:43:02.485597 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-05 00:43:02.485627 | orchestrator | Sunday 05 April 2026 00:42:56 +0000 (0:00:00.305) 0:00:24.006 ********** 2026-04-05 00:43:02.485638 | orchestrator | skipping: 
[testbed-node-4] 2026-04-05 00:43:02.485650 | orchestrator | 2026-04-05 00:43:02.485661 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-05 00:43:02.485672 | orchestrator | Sunday 05 April 2026 00:42:56 +0000 (0:00:00.127) 0:00:24.133 ********** 2026-04-05 00:43:02.485683 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.485694 | orchestrator | 2026-04-05 00:43:02.485705 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-05 00:43:02.485721 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 (0:00:00.126) 0:00:24.260 ********** 2026-04-05 00:43:02.485732 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.485744 | orchestrator | 2026-04-05 00:43:02.485759 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-05 00:43:02.485773 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 (0:00:00.146) 0:00:24.406 ********** 2026-04-05 00:43:02.485810 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:43:02.485825 | orchestrator | 2026-04-05 00:43:02.485838 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-05 00:43:02.485851 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 (0:00:00.112) 0:00:24.519 ********** 2026-04-05 00:43:02.485866 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '84662fb7-c7ec-5f43-83c1-849532919194'}}) 2026-04-05 00:43:02.485880 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df39e39b-9449-5ecb-9afa-151663e06960'}}) 2026-04-05 00:43:02.485893 | orchestrator | 2026-04-05 00:43:02.485906 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-05 00:43:02.485919 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 (0:00:00.198) 0:00:24.718 ********** 2026-04-05 00:43:02.485932 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '84662fb7-c7ec-5f43-83c1-849532919194'}})  2026-04-05 00:43:02.485948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df39e39b-9449-5ecb-9afa-151663e06960'}})  2026-04-05 00:43:02.485960 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.485973 | orchestrator | 2026-04-05 00:43:02.485986 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-05 00:43:02.486000 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 (0:00:00.152) 0:00:24.870 ********** 2026-04-05 00:43:02.486013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '84662fb7-c7ec-5f43-83c1-849532919194'}})  2026-04-05 00:43:02.486099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df39e39b-9449-5ecb-9afa-151663e06960'}})  2026-04-05 00:43:02.486121 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.486140 | orchestrator | 2026-04-05 00:43:02.486161 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-05 00:43:02.486181 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 (0:00:00.135) 0:00:25.005 ********** 2026-04-05 00:43:02.486192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '84662fb7-c7ec-5f43-83c1-849532919194'}})  2026-04-05 00:43:02.486203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df39e39b-9449-5ecb-9afa-151663e06960'}})  2026-04-05 00:43:02.486214 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.486225 | orchestrator | 2026-04-05 00:43:02.486236 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-05 00:43:02.486247 | orchestrator | Sunday 05 April 2026 00:42:57 +0000 
(0:00:00.159) 0:00:25.165 ********** 2026-04-05 00:43:02.486258 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:43:02.486270 | orchestrator | 2026-04-05 00:43:02.486289 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-05 00:43:02.486308 | orchestrator | Sunday 05 April 2026 00:42:58 +0000 (0:00:00.132) 0:00:25.297 ********** 2026-04-05 00:43:02.486326 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:43:02.486338 | orchestrator | 2026-04-05 00:43:02.486348 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-05 00:43:02.486359 | orchestrator | Sunday 05 April 2026 00:42:58 +0000 (0:00:00.138) 0:00:25.435 ********** 2026-04-05 00:43:02.486388 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.486400 | orchestrator | 2026-04-05 00:43:02.486411 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-05 00:43:02.486422 | orchestrator | Sunday 05 April 2026 00:42:58 +0000 (0:00:00.117) 0:00:25.553 ********** 2026-04-05 00:43:02.486464 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.486476 | orchestrator | 2026-04-05 00:43:02.486487 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-05 00:43:02.486498 | orchestrator | Sunday 05 April 2026 00:42:58 +0000 (0:00:00.294) 0:00:25.847 ********** 2026-04-05 00:43:02.486509 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:43:02.486530 | orchestrator | 2026-04-05 00:43:02.486541 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-05 00:43:02.486552 | orchestrator | Sunday 05 April 2026 00:42:58 +0000 (0:00:00.115) 0:00:25.963 ********** 2026-04-05 00:43:02.486563 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 00:43:02.486574 | orchestrator |  "ceph_osd_devices": { 2026-04-05 00:43:02.486585 | orchestrator |  "sdb": 
{
2026-04-05 00:43:02.486596 | orchestrator |             "osd_lvm_uuid": "84662fb7-c7ec-5f43-83c1-849532919194"
2026-04-05 00:43:02.486607 | orchestrator |         },
2026-04-05 00:43:02.486618 | orchestrator |         "sdc": {
2026-04-05 00:43:02.486629 | orchestrator |             "osd_lvm_uuid": "df39e39b-9449-5ecb-9afa-151663e06960"
2026-04-05 00:43:02.486640 | orchestrator |         }
2026-04-05 00:43:02.486651 | orchestrator |     }
2026-04-05 00:43:02.486662 | orchestrator | }
2026-04-05 00:43:02.486673 | orchestrator |
2026-04-05 00:43:02.486684 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-05 00:43:02.486695 | orchestrator | Sunday 05 April 2026 00:42:59 +0000 (0:00:00.231) 0:00:26.194 **********
2026-04-05 00:43:02.486706 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:02.486717 | orchestrator |
2026-04-05 00:43:02.486728 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-05 00:43:02.486739 | orchestrator | Sunday 05 April 2026 00:42:59 +0000 (0:00:00.157) 0:00:26.352 **********
2026-04-05 00:43:02.486749 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:02.486760 | orchestrator |
2026-04-05 00:43:02.486771 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-05 00:43:02.486782 | orchestrator | Sunday 05 April 2026 00:42:59 +0000 (0:00:00.137) 0:00:26.489 **********
2026-04-05 00:43:02.486793 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:43:02.486804 | orchestrator |
2026-04-05 00:43:02.486815 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-05 00:43:02.486832 | orchestrator | Sunday 05 April 2026 00:42:59 +0000 (0:00:00.153) 0:00:26.643 **********
2026-04-05 00:43:02.486843 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 00:43:02.486854 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-05 00:43:02.486865 | orchestrator |         "ceph_osd_devices": {
2026-04-05 00:43:02.486876 | orchestrator |             "sdb": {
2026-04-05 00:43:02.486887 | orchestrator |                 "osd_lvm_uuid": "84662fb7-c7ec-5f43-83c1-849532919194"
2026-04-05 00:43:02.486897 | orchestrator |             },
2026-04-05 00:43:02.486908 | orchestrator |             "sdc": {
2026-04-05 00:43:02.486919 | orchestrator |                 "osd_lvm_uuid": "df39e39b-9449-5ecb-9afa-151663e06960"
2026-04-05 00:43:02.486930 | orchestrator |             }
2026-04-05 00:43:02.486941 | orchestrator |         },
2026-04-05 00:43:02.486951 | orchestrator |         "lvm_volumes": [
2026-04-05 00:43:02.486962 | orchestrator |             {
2026-04-05 00:43:02.486973 | orchestrator |                 "data": "osd-block-84662fb7-c7ec-5f43-83c1-849532919194",
2026-04-05 00:43:02.486984 | orchestrator |                 "data_vg": "ceph-84662fb7-c7ec-5f43-83c1-849532919194"
2026-04-05 00:43:02.486995 | orchestrator |             },
2026-04-05 00:43:02.487006 | orchestrator |             {
2026-04-05 00:43:02.487017 | orchestrator |                 "data": "osd-block-df39e39b-9449-5ecb-9afa-151663e06960",
2026-04-05 00:43:02.487028 | orchestrator |                 "data_vg": "ceph-df39e39b-9449-5ecb-9afa-151663e06960"
2026-04-05 00:43:02.487039 | orchestrator |             }
2026-04-05 00:43:02.487049 | orchestrator |         ]
2026-04-05 00:43:02.487060 | orchestrator |     }
2026-04-05 00:43:02.487072 | orchestrator | }
2026-04-05 00:43:02.487082 | orchestrator |
2026-04-05 00:43:02.487093 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-05 00:43:02.487104 | orchestrator | Sunday 05 April 2026 00:42:59 +0000 (0:00:00.238) 0:00:26.881 **********
2026-04-05 00:43:02.487115 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-05 00:43:02.487125 | orchestrator |
2026-04-05 00:43:02.487143 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-05 00:43:02.487154 | orchestrator |
2026-04-05 00:43:02.487165 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
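The configuration dump above shows the pattern the playbook applies: each OSD data device contributes one `lvm_volumes` entry whose LV is named `osd-block-<osd_lvm_uuid>` inside a VG named `ceph-<osd_lvm_uuid>`. A minimal sketch of that mapping (not the playbook's actual code, just an illustration of the naming scheme visible in the log):

```python
# Sketch: derive the lvm_volumes list from the ceph_osd_devices mapping,
# following the osd-block-<uuid> / ceph-<uuid> naming seen in the log output.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "84662fb7-c7ec-5f43-83c1-849532919194"},
    "sdc": {"osd_lvm_uuid": "df39e39b-9449-5ecb-9afa-151663e06960"},
}

def lvm_volumes(devices: dict) -> list:
    # One LV "osd-block-<uuid>" in one VG "ceph-<uuid>" per data device.
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in devices.values()
    ]

print(lvm_volumes(ceph_osd_devices))
```

This matches the `_ceph_configure_lvm_config_data` structure printed for testbed-node-4 above.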
2026-04-05 00:43:02.487176 | orchestrator | Sunday 05 April 2026 00:43:00 +0000 (0:00:01.242) 0:00:28.123 **********
2026-04-05 00:43:02.487186 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-05 00:43:02.487197 | orchestrator |
2026-04-05 00:43:02.487208 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:43:02.487219 | orchestrator | Sunday 05 April 2026 00:43:01 +0000 (0:00:00.771) 0:00:28.579 **********
2026-04-05 00:43:02.487229 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:43:02.487240 | orchestrator |
2026-04-05 00:43:02.487251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:02.487262 | orchestrator | Sunday 05 April 2026 00:43:02 +0000 (0:00:00.411) 0:00:29.351 **********
2026-04-05 00:43:02.487273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:43:02.487283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:43:02.487294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:43:02.487305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:43:02.487316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:43:02.487334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:43:10.937729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:43:10.937819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:43:10.937829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-05 00:43:10.937837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:43:10.937845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:43:10.937852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:43:10.937859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:43:10.937867 | orchestrator |
2026-04-05 00:43:10.937876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.937885 | orchestrator | Sunday 05 April 2026 00:43:02 +0000 (0:00:00.411) 0:00:29.762 **********
2026-04-05 00:43:10.937892 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.937900 | orchestrator |
2026-04-05 00:43:10.937908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.937915 | orchestrator | Sunday 05 April 2026 00:43:02 +0000 (0:00:00.192) 0:00:29.955 **********
2026-04-05 00:43:10.937922 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.937929 | orchestrator |
2026-04-05 00:43:10.937936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.937944 | orchestrator | Sunday 05 April 2026 00:43:03 +0000 (0:00:00.240) 0:00:30.195 **********
2026-04-05 00:43:10.937951 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.937958 | orchestrator |
2026-04-05 00:43:10.937965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.937972 | orchestrator | Sunday 05 April 2026 00:43:03 +0000 (0:00:00.213) 0:00:30.409 **********
2026-04-05 00:43:10.937980 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.937987 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938001 | orchestrator | Sunday 05 April 2026 00:43:03 +0000 (0:00:00.201) 0:00:30.611 **********
2026-04-05 00:43:10.938068 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938078 | orchestrator |
2026-04-05 00:43:10.938086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938093 | orchestrator | Sunday 05 April 2026 00:43:03 +0000 (0:00:00.183) 0:00:30.794 **********
2026-04-05 00:43:10.938100 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938107 | orchestrator |
2026-04-05 00:43:10.938115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938122 | orchestrator | Sunday 05 April 2026 00:43:03 +0000 (0:00:00.184) 0:00:30.979 **********
2026-04-05 00:43:10.938129 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938136 | orchestrator |
2026-04-05 00:43:10.938144 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938151 | orchestrator | Sunday 05 April 2026 00:43:04 +0000 (0:00:00.195) 0:00:31.174 **********
2026-04-05 00:43:10.938158 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938165 | orchestrator |
2026-04-05 00:43:10.938172 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938179 | orchestrator | Sunday 05 April 2026 00:43:04 +0000 (0:00:00.231) 0:00:31.406 **********
2026-04-05 00:43:10.938187 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e)
2026-04-05 00:43:10.938195 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e)
2026-04-05 00:43:10.938202 | orchestrator |
2026-04-05 00:43:10.938209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938216 | orchestrator | Sunday 05 April 2026 00:43:04 +0000 (0:00:00.688) 0:00:32.095 **********
2026-04-05 00:43:10.938238 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6)
2026-04-05 00:43:10.938245 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6)
2026-04-05 00:43:10.938252 | orchestrator |
2026-04-05 00:43:10.938260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938267 | orchestrator | Sunday 05 April 2026 00:43:05 +0000 (0:00:00.874) 0:00:32.969 **********
2026-04-05 00:43:10.938293 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379)
2026-04-05 00:43:10.938316 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379)
2026-04-05 00:43:10.938336 | orchestrator |
2026-04-05 00:43:10.938350 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938361 | orchestrator | Sunday 05 April 2026 00:43:06 +0000 (0:00:00.416) 0:00:33.385 **********
2026-04-05 00:43:10.938373 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da)
2026-04-05 00:43:10.938386 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da)
2026-04-05 00:43:10.938398 | orchestrator |
2026-04-05 00:43:10.938411 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:43:10.938423 | orchestrator | Sunday 05 April 2026 00:43:06 +0000 (0:00:00.449) 0:00:33.835 **********
2026-04-05 00:43:10.938512 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:43:10.938523 | orchestrator |
2026-04-05 00:43:10.938532 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938559 | orchestrator | Sunday 05 April 2026 00:43:07 +0000 (0:00:00.384) 0:00:34.219 **********
2026-04-05 00:43:10.938572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:43:10.938589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:43:10.938607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:43:10.938618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:43:10.938638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:43:10.938649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:43:10.938659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:43:10.938671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:43:10.938681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-05 00:43:10.938692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:43:10.938701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:43:10.938711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:43:10.938721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:43:10.938731 | orchestrator |
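The "Add known links" tasks above collect the stable `/dev/disk/by-id` aliases (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) for each short kernel name like `sda`. A small sketch of that lookup, using an illustrative in-memory mapping instead of reading the real `/dev/disk/by-id` directory (on a host one would build it with `os.readlink` over that directory):

```python
# Sketch: map stable by-id symlink names to the kernel device they resolve to,
# then collect the aliases for one device. The mapping below is illustrative,
# taken from the link names visible in the log.
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e": "sda",
    "scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e": "sda",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}

def links_for(device: str, links: dict) -> list:
    # All by-id names pointing at the given kernel device, sorted for stability.
    return sorted(name for name, target in links.items() if target == device)

print(links_for("sda", by_id))
```

Using by-id links rather than `sdb`/`sdc` directly is what keeps OSD device references stable across reboots, since kernel names can be reassigned.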
2026-04-05 00:43:10.938742 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938753 | orchestrator | Sunday 05 April 2026 00:43:07 +0000 (0:00:00.336) 0:00:34.555 **********
2026-04-05 00:43:10.938763 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938773 | orchestrator |
2026-04-05 00:43:10.938784 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938795 | orchestrator | Sunday 05 April 2026 00:43:07 +0000 (0:00:00.175) 0:00:34.731 **********
2026-04-05 00:43:10.938807 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938818 | orchestrator |
2026-04-05 00:43:10.938828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938840 | orchestrator | Sunday 05 April 2026 00:43:07 +0000 (0:00:00.196) 0:00:34.928 **********
2026-04-05 00:43:10.938851 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938862 | orchestrator |
2026-04-05 00:43:10.938873 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938885 | orchestrator | Sunday 05 April 2026 00:43:07 +0000 (0:00:00.175) 0:00:35.104 **********
2026-04-05 00:43:10.938898 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938909 | orchestrator |
2026-04-05 00:43:10.938920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938932 | orchestrator | Sunday 05 April 2026 00:43:08 +0000 (0:00:00.180) 0:00:35.284 **********
2026-04-05 00:43:10.938945 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938953 | orchestrator |
2026-04-05 00:43:10.938960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938967 | orchestrator | Sunday 05 April 2026 00:43:08 +0000 (0:00:00.215) 0:00:35.500 **********
2026-04-05 00:43:10.938974 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.938981 | orchestrator |
2026-04-05 00:43:10.938988 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.938995 | orchestrator | Sunday 05 April 2026 00:43:08 +0000 (0:00:00.516) 0:00:36.016 **********
2026-04-05 00:43:10.939002 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.939009 | orchestrator |
2026-04-05 00:43:10.939016 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.939023 | orchestrator | Sunday 05 April 2026 00:43:09 +0000 (0:00:00.193) 0:00:36.210 **********
2026-04-05 00:43:10.939030 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.939037 | orchestrator |
2026-04-05 00:43:10.939044 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.939051 | orchestrator | Sunday 05 April 2026 00:43:09 +0000 (0:00:00.225) 0:00:36.436 **********
2026-04-05 00:43:10.939058 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-05 00:43:10.939074 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-05 00:43:10.939082 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-05 00:43:10.939089 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-05 00:43:10.939096 | orchestrator |
2026-04-05 00:43:10.939103 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.939110 | orchestrator | Sunday 05 April 2026 00:43:09 +0000 (0:00:00.661) 0:00:37.098 **********
2026-04-05 00:43:10.939117 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.939124 | orchestrator |
2026-04-05 00:43:10.939131 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.939139 | orchestrator | Sunday 05 April 2026 00:43:10 +0000 (0:00:00.305) 0:00:37.404 **********
2026-04-05 00:43:10.939145 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.939152 | orchestrator |
2026-04-05 00:43:10.939160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.939167 | orchestrator | Sunday 05 April 2026 00:43:10 +0000 (0:00:00.263) 0:00:37.668 **********
2026-04-05 00:43:10.939174 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.939181 | orchestrator |
2026-04-05 00:43:10.939188 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:43:10.939195 | orchestrator | Sunday 05 April 2026 00:43:10 +0000 (0:00:00.213) 0:00:37.881 **********
2026-04-05 00:43:10.939202 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:10.939209 | orchestrator |
2026-04-05 00:43:10.939225 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-05 00:43:15.557930 | orchestrator | Sunday 05 April 2026 00:43:10 +0000 (0:00:00.223) 0:00:38.105 **********
2026-04-05 00:43:15.558050 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-05 00:43:15.558062 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-05 00:43:15.558069 | orchestrator |
2026-04-05 00:43:15.558077 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-05 00:43:15.558083 | orchestrator | Sunday 05 April 2026 00:43:11 +0000 (0:00:00.177) 0:00:38.282 **********
2026-04-05 00:43:15.558090 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558096 | orchestrator |
2026-04-05 00:43:15.558102 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-05 00:43:15.558108 | orchestrator | Sunday 05 April 2026 00:43:11 +0000 (0:00:00.136) 0:00:38.419 **********
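The "Set UUIDs for OSD VGs/LVs" task fills in the `osd_lvm_uuid` values that were still `None`. All of the UUIDs in this log (`…-c7ec-5f43-…`, `…-7b74-52e9-…`) are version 5, i.e. name-based, which indicates they are generated deterministically rather than randomly, so a re-run on the same host/device yields the same VG/LV names. A hedged sketch of that idea; the namespace and name inputs below are assumptions for illustration, not the playbook's actual inputs:

```python
import uuid

# Sketch: deterministic, name-based (version 5) UUID per host/device pair.
# The real playbook's namespace and seed string may differ; this only
# demonstrates why repeated runs produce identical osd_lvm_uuid values.
def osd_lvm_uuid(hostname: str, device: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

u = osd_lvm_uuid("testbed-node-5", "sdb")
print(u)
# Determinism: the same inputs always give the same UUID.
assert u == osd_lvm_uuid("testbed-node-5", "sdb")
```

Stable UUIDs matter here because the VG (`ceph-<uuid>`) and LV (`osd-block-<uuid>`) names written to the host_vars file must keep matching the volumes created on disk.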
2026-04-05 00:43:15.558129 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558136 | orchestrator |
2026-04-05 00:43:15.558143 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-05 00:43:15.558149 | orchestrator | Sunday 05 April 2026 00:43:11 +0000 (0:00:00.135) 0:00:38.554 **********
2026-04-05 00:43:15.558155 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558160 | orchestrator |
2026-04-05 00:43:15.558167 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-05 00:43:15.558173 | orchestrator | Sunday 05 April 2026 00:43:11 +0000 (0:00:00.137) 0:00:38.692 **********
2026-04-05 00:43:15.558179 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:43:15.558187 | orchestrator |
2026-04-05 00:43:15.558193 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-05 00:43:15.558200 | orchestrator | Sunday 05 April 2026 00:43:11 +0000 (0:00:00.354) 0:00:39.047 **********
2026-04-05 00:43:15.558206 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}})
2026-04-05 00:43:15.558215 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbeab33-88c6-544f-8f85-2175dc04d523'}})
2026-04-05 00:43:15.558222 | orchestrator |
2026-04-05 00:43:15.558228 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-05 00:43:15.558234 | orchestrator | Sunday 05 April 2026 00:43:12 +0000 (0:00:00.225) 0:00:39.272 **********
2026-04-05 00:43:15.558240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}})
2026-04-05 00:43:15.558266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbeab33-88c6-544f-8f85-2175dc04d523'}})
2026-04-05 00:43:15.558273 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558279 | orchestrator |
2026-04-05 00:43:15.558285 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-05 00:43:15.558291 | orchestrator | Sunday 05 April 2026 00:43:12 +0000 (0:00:00.182) 0:00:39.455 **********
2026-04-05 00:43:15.558297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}})
2026-04-05 00:43:15.558303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbeab33-88c6-544f-8f85-2175dc04d523'}})
2026-04-05 00:43:15.558309 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558315 | orchestrator |
2026-04-05 00:43:15.558321 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-05 00:43:15.558327 | orchestrator | Sunday 05 April 2026 00:43:12 +0000 (0:00:00.185) 0:00:39.641 **********
2026-04-05 00:43:15.558333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}})
2026-04-05 00:43:15.558340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbeab33-88c6-544f-8f85-2175dc04d523'}})
2026-04-05 00:43:15.558345 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558351 | orchestrator |
2026-04-05 00:43:15.558357 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-05 00:43:15.558363 | orchestrator | Sunday 05 April 2026 00:43:12 +0000 (0:00:00.224) 0:00:39.866 **********
2026-04-05 00:43:15.558369 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:43:15.558375 | orchestrator |
2026-04-05 00:43:15.558381 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-05 00:43:15.558387 | orchestrator | Sunday 05 April 2026 00:43:12 +0000 (0:00:00.177) 0:00:40.043 **********
2026-04-05 00:43:15.558393 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:43:15.558399 | orchestrator |
2026-04-05 00:43:15.558405 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-05 00:43:15.558411 | orchestrator | Sunday 05 April 2026 00:43:13 +0000 (0:00:00.154) 0:00:40.198 **********
2026-04-05 00:43:15.558416 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558422 | orchestrator |
2026-04-05 00:43:15.558428 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-05 00:43:15.558465 | orchestrator | Sunday 05 April 2026 00:43:13 +0000 (0:00:00.181) 0:00:40.380 **********
2026-04-05 00:43:15.558471 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558477 | orchestrator |
2026-04-05 00:43:15.558484 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-05 00:43:15.558489 | orchestrator | Sunday 05 April 2026 00:43:13 +0000 (0:00:00.134) 0:00:40.514 **********
2026-04-05 00:43:15.558495 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558502 | orchestrator |
2026-04-05 00:43:15.558508 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-05 00:43:15.558514 | orchestrator | Sunday 05 April 2026 00:43:13 +0000 (0:00:00.136) 0:00:40.650 **********
2026-04-05 00:43:15.558520 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 00:43:15.558526 | orchestrator |     "ceph_osd_devices": {
2026-04-05 00:43:15.558532 | orchestrator |         "sdb": {
2026-04-05 00:43:15.558553 | orchestrator |             "osd_lvm_uuid": "01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a"
2026-04-05 00:43:15.558560 | orchestrator |         },
2026-04-05 00:43:15.558566 | orchestrator |         "sdc": {
2026-04-05 00:43:15.558572 | orchestrator |             "osd_lvm_uuid": "1dbeab33-88c6-544f-8f85-2175dc04d523"
2026-04-05 00:43:15.558579 | orchestrator |         }
2026-04-05 00:43:15.558584 | orchestrator |     }
2026-04-05 00:43:15.558591 | orchestrator | }
2026-04-05 00:43:15.558597 | orchestrator |
2026-04-05 00:43:15.558609 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-05 00:43:15.558615 | orchestrator | Sunday 05 April 2026 00:43:13 +0000 (0:00:00.131) 0:00:40.782 **********
2026-04-05 00:43:15.558621 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558627 | orchestrator |
2026-04-05 00:43:15.558634 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-05 00:43:15.558640 | orchestrator | Sunday 05 April 2026 00:43:13 +0000 (0:00:00.139) 0:00:40.922 **********
2026-04-05 00:43:15.558646 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558652 | orchestrator |
2026-04-05 00:43:15.558657 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-05 00:43:15.558663 | orchestrator | Sunday 05 April 2026 00:43:14 +0000 (0:00:00.374) 0:00:41.297 **********
2026-04-05 00:43:15.558669 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:43:15.558675 | orchestrator |
2026-04-05 00:43:15.558681 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-05 00:43:15.558687 | orchestrator | Sunday 05 April 2026 00:43:14 +0000 (0:00:00.140) 0:00:41.437 **********
2026-04-05 00:43:15.558693 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 00:43:15.558699 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-05 00:43:15.558705 | orchestrator |         "ceph_osd_devices": {
2026-04-05 00:43:15.558711 | orchestrator |             "sdb": {
2026-04-05 00:43:15.558717 | orchestrator |                 "osd_lvm_uuid": "01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a"
2026-04-05 00:43:15.558723 | orchestrator |             },
2026-04-05 00:43:15.558729 | orchestrator |             "sdc": {
2026-04-05 00:43:15.558735 | orchestrator |                 "osd_lvm_uuid": "1dbeab33-88c6-544f-8f85-2175dc04d523"
2026-04-05 00:43:15.558741 | orchestrator |             }
2026-04-05 00:43:15.558748 | orchestrator |         },
2026-04-05 00:43:15.558753 | orchestrator |         "lvm_volumes": [
2026-04-05 00:43:15.558760 | orchestrator |             {
2026-04-05 00:43:15.558766 | orchestrator |                 "data": "osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a",
2026-04-05 00:43:15.558773 | orchestrator |                 "data_vg": "ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a"
2026-04-05 00:43:15.558778 | orchestrator |             },
2026-04-05 00:43:15.558787 | orchestrator |             {
2026-04-05 00:43:15.558793 | orchestrator |                 "data": "osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523",
2026-04-05 00:43:15.558799 | orchestrator |                 "data_vg": "ceph-1dbeab33-88c6-544f-8f85-2175dc04d523"
2026-04-05 00:43:15.558805 | orchestrator |             }
2026-04-05 00:43:15.558812 | orchestrator |         ]
2026-04-05 00:43:15.558818 | orchestrator |     }
2026-04-05 00:43:15.558824 | orchestrator | }
2026-04-05 00:43:15.558830 | orchestrator |
2026-04-05 00:43:15.558836 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-05 00:43:15.558841 | orchestrator | Sunday 05 April 2026 00:43:14 +0000 (0:00:00.231) 0:00:41.669 **********
2026-04-05 00:43:15.558847 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-05 00:43:15.558853 | orchestrator |
2026-04-05 00:43:15.558859 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:43:15.558866 | orchestrator | testbed-node-3             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-04-05 00:43:15.558874 | orchestrator | testbed-node-4             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-04-05 00:43:15.558880 | orchestrator | testbed-node-5             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-04-05 00:43:15.558886 | orchestrator |
2026-04-05 00:43:15.558892 | orchestrator |
2026-04-05 00:43:15.558898 | orchestrator |
2026-04-05 00:43:15.558904 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:43:15.558909 | orchestrator | Sunday 05 April 2026 00:43:15 +0000 (0:00:01.039) 0:00:42.708 **********
2026-04-05 00:43:15.558920 | orchestrator | ===============================================================================
2026-04-05 00:43:15.558926 | orchestrator | Write configuration file ------------------------------------------------ 4.56s
2026-04-05 00:43:15.558932 | orchestrator | Add known partitions to the list of available block devices ------------- 1.24s
2026-04-05 00:43:15.558942 | orchestrator | Get initial list of available block devices ----------------------------- 1.21s
2026-04-05 00:43:15.558948 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s
2026-04-05 00:43:15.558955 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-04-05 00:43:15.558961 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.94s
2026-04-05 00:43:15.558967 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2026-04-05 00:43:15.558973 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2026-04-05 00:43:15.558980 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-04-05 00:43:15.558985 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.67s
2026-04-05 00:43:15.558991 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-04-05 00:43:15.558998 | orchestrator | Print configuration data ------------------------------------------------ 0.66s
2026-04-05 00:43:15.559003 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s
2026-04-05 00:43:15.559014 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-04-05 00:43:15.910637 | orchestrator | Print DB devices -------------------------------------------------------- 0.65s
2026-04-05 00:43:15.910724 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-04-05 00:43:15.910744 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-04-05 00:43:15.910756 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-04-05 00:43:15.910767 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.61s
2026-04-05 00:43:15.910778 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.61s
2026-04-05 00:43:38.187349 | orchestrator | 2026-04-05 00:43:38 | INFO  | Task 0174fbee-5e50-49f8-a78c-85b7a72c4fd0 (sync inventory) is running in background. Output coming soon.
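The PLAY RECAP above is the quickest health check for a run like this: all three nodes report `failed=0` and `unreachable=0`. A small sketch of machine-checking such a recap line (illustrative parser, not part of the osism tooling):

```python
import re

# Sketch: parse an Ansible PLAY RECAP line into (host, counters).
line = ("testbed-node-5             : ok=42   changed=2    unreachable=0    "
        "failed=0    skipped=32   rescued=0    ignored=0")

def parse_recap(line: str):
    host, _, rest = line.partition(" : ")
    # Each counter looks like key=<int>, e.g. ok=42 or failed=0.
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, counts = parse_recap(line)
# A run is healthy when nothing failed and every host was reachable.
print(host, counts["failed"] == 0 and counts["unreachable"] == 0)
```

CI jobs often gate on exactly this condition rather than grepping for the word "failed", which also appears in healthy recap lines.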
2026-04-05 00:44:10.545537 | orchestrator | 2026-04-05 00:43:39 | INFO  | Starting group_vars file reorganization
2026-04-05 00:44:10.545663 | orchestrator | 2026-04-05 00:43:39 | INFO  | Moved 0 file(s) to their respective directories
2026-04-05 00:44:10.545686 | orchestrator | 2026-04-05 00:43:39 | INFO  | Group_vars file reorganization completed
2026-04-05 00:44:10.545702 | orchestrator | 2026-04-05 00:43:42 | INFO  | Starting variable preparation from inventory
2026-04-05 00:44:10.545718 | orchestrator | 2026-04-05 00:43:45 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-05 00:44:10.545733 | orchestrator | 2026-04-05 00:43:45 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-05 00:44:10.545769 | orchestrator | 2026-04-05 00:43:45 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-05 00:44:10.545783 | orchestrator | 2026-04-05 00:43:45 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-05 00:44:10.545799 | orchestrator | 2026-04-05 00:43:45 | INFO  | Variable preparation completed
2026-04-05 00:44:10.545813 | orchestrator | 2026-04-05 00:43:47 | INFO  | Starting inventory overwrite handling
2026-04-05 00:44:10.545828 | orchestrator | 2026-04-05 00:43:47 | INFO  | Handling group overwrites in 99-overwrite
2026-04-05 00:44:10.545843 | orchestrator | 2026-04-05 00:43:47 | INFO  | Removing group frr:children from 60-generic
2026-04-05 00:44:10.545885 | orchestrator | 2026-04-05 00:43:47 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-05 00:44:10.545899 | orchestrator | 2026-04-05 00:43:47 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-05 00:44:10.545913 | orchestrator | 2026-04-05 00:43:47 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-05 00:44:10.545925 | orchestrator | 2026-04-05 00:43:47 | INFO  | Handling group overwrites in 20-roles
2026-04-05 00:44:10.545937 | orchestrator | 2026-04-05 00:43:47 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-05 00:44:10.545949 | orchestrator | 2026-04-05 00:43:47 | INFO  | Removed 5 group(s) in total
2026-04-05 00:44:10.545961 | orchestrator | 2026-04-05 00:43:47 | INFO  | Inventory overwrite handling completed
2026-04-05 00:44:10.545974 | orchestrator | 2026-04-05 00:43:48 | INFO  | Starting merge of inventory files
2026-04-05 00:44:10.545986 | orchestrator | 2026-04-05 00:43:48 | INFO  | Inventory files merged successfully
2026-04-05 00:44:10.545997 | orchestrator | 2026-04-05 00:43:54 | INFO  | Generating minified hosts file
2026-04-05 00:44:10.546008 | orchestrator | 2026-04-05 00:43:55 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-05 00:44:10.546079 | orchestrator | 2026-04-05 00:43:55 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-05 00:44:10.546095 | orchestrator | 2026-04-05 00:43:57 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-05 00:44:10.546108 | orchestrator | 2026-04-05 00:44:09 | INFO  | Successfully wrote ClusterShell configuration
2026-04-05 00:44:10.546121 | orchestrator | [master 6f7db37] 2026-04-05-00-44
2026-04-05 00:44:10.546145 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-05 00:44:10.546161 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-05 00:44:10.546175 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-05 00:44:10.546190 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-05 00:44:11.988801 | orchestrator | 2026-04-05 00:44:11 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-05 00:44:12.054733 | orchestrator | 2026-04-05 00:44:12 | INFO  | Task 8d05587d-f8cf-4068-a444-ae3ff298560e (ceph-create-lvm-devices) was prepared for execution.
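The "inventory overwrite handling" messages above show groups that are (re)defined in a higher-priority inventory file (such as 99-overwrite or 20-roles) being removed from lower-priority files before the merge, so each group keeps exactly one definition. A minimal sketch of that idea, assuming inventories modeled as plain dicts keyed by file name (this is only an illustration, not the osism implementation; `handle_overwrites` is a hypothetical helper):

```python
# Hypothetical sketch of the "group overwrite" step: a group defined in a
# higher-priority inventory file removes the same-named group from every
# lower-priority file, so the later merge keeps a single definition.

def handle_overwrites(inventory_files: dict) -> int:
    """inventory_files maps file name -> {group name: members}.
    File names sort by priority (e.g. 20-roles < 60-generic < 99-overwrite).
    Returns the number of group definitions removed, mirroring the
    'Removed N group(s) in total' log line."""
    removed = 0
    names = sorted(inventory_files)
    for i, high in enumerate(names):
        for group in list(inventory_files[high]):
            for low in names[:i]:  # every lower-priority file
                if group in inventory_files[low]:
                    del inventory_files[low][group]
                    removed += 1
    return removed
```

With `{"60-generic": {"frr:children": [...]}, "99-overwrite": {"frr:children": [...]}}`, the group would be dropped from 60-generic, matching the "Removing group frr:children from 60-generic" line above.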
2026-04-05 00:44:12.054810 | orchestrator | 2026-04-05 00:44:12 | INFO  | It takes a moment until task 8d05587d-f8cf-4068-a444-ae3ff298560e (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-05 00:44:25.350128 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 00:44:25.350202 | orchestrator | 2.16.14
2026-04-05 00:44:25.350210 | orchestrator |
2026-04-05 00:44:25.350215 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-05 00:44:25.350221 | orchestrator |
2026-04-05 00:44:25.350225 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:44:25.350229 | orchestrator | Sunday 05 April 2026 00:44:16 +0000 (0:00:00.278) 0:00:00.278 **********
2026-04-05 00:44:25.350234 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 00:44:25.350238 | orchestrator |
2026-04-05 00:44:25.350243 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:44:25.350246 | orchestrator | Sunday 05 April 2026 00:44:16 +0000 (0:00:00.281) 0:00:00.560 **********
2026-04-05 00:44:25.350250 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:25.350254 | orchestrator |
2026-04-05 00:44:25.350258 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350262 | orchestrator | Sunday 05 April 2026 00:44:17 +0000 (0:00:00.285) 0:00:00.846 **********
2026-04-05 00:44:25.350284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:44:25.350288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:44:25.350292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:44:25.350296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:44:25.350300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:44:25.350304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:44:25.350308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:44:25.350312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:44:25.350317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-05 00:44:25.350321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:44:25.350325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:44:25.350328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:44:25.350332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:44:25.350336 | orchestrator |
2026-04-05 00:44:25.350340 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350344 | orchestrator | Sunday 05 April 2026 00:44:17 +0000 (0:00:00.451) 0:00:01.297 **********
2026-04-05 00:44:25.350348 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350351 | orchestrator |
2026-04-05 00:44:25.350355 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350359 | orchestrator | Sunday 05 April 2026 00:44:18 +0000 (0:00:00.659) 0:00:01.957 **********
2026-04-05 00:44:25.350363 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350367 | orchestrator |
2026-04-05 00:44:25.350371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350375 | orchestrator | Sunday 05 April 2026 00:44:18 +0000 (0:00:00.283) 0:00:02.240 **********
2026-04-05 00:44:25.350390 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350394 | orchestrator |
2026-04-05 00:44:25.350398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350402 | orchestrator | Sunday 05 April 2026 00:44:18 +0000 (0:00:00.210) 0:00:02.451 **********
2026-04-05 00:44:25.350406 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350410 | orchestrator |
2026-04-05 00:44:25.350414 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350418 | orchestrator | Sunday 05 April 2026 00:44:18 +0000 (0:00:00.222) 0:00:02.673 **********
2026-04-05 00:44:25.350422 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350426 | orchestrator |
2026-04-05 00:44:25.350430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350433 | orchestrator | Sunday 05 April 2026 00:44:19 +0000 (0:00:00.226) 0:00:02.899 **********
2026-04-05 00:44:25.350437 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350441 | orchestrator |
2026-04-05 00:44:25.350445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350449 | orchestrator | Sunday 05 April 2026 00:44:19 +0000 (0:00:00.252) 0:00:03.152 **********
2026-04-05 00:44:25.350453 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350457 | orchestrator |
2026-04-05 00:44:25.350461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350509 | orchestrator | Sunday 05 April 2026 00:44:19 +0000 (0:00:00.220) 0:00:03.372 **********
2026-04-05 00:44:25.350514 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350522 | orchestrator |
2026-04-05 00:44:25.350526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350530 | orchestrator | Sunday 05 April 2026 00:44:19 +0000 (0:00:00.212) 0:00:03.584 **********
2026-04-05 00:44:25.350534 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f)
2026-04-05 00:44:25.350539 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f)
2026-04-05 00:44:25.350543 | orchestrator |
2026-04-05 00:44:25.350547 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350560 | orchestrator | Sunday 05 April 2026 00:44:20 +0000 (0:00:00.490) 0:00:04.075 **********
2026-04-05 00:44:25.350565 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e)
2026-04-05 00:44:25.350569 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e)
2026-04-05 00:44:25.350573 | orchestrator |
2026-04-05 00:44:25.350576 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350580 | orchestrator | Sunday 05 April 2026 00:44:20 +0000 (0:00:00.458) 0:00:04.533 **********
2026-04-05 00:44:25.350584 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3)
2026-04-05 00:44:25.350588 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3)
2026-04-05 00:44:25.350592 | orchestrator |
2026-04-05 00:44:25.350596 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350600 | orchestrator | Sunday 05 April 2026 00:44:21 +0000 (0:00:00.761) 0:00:05.295 **********
2026-04-05 00:44:25.350604 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86)
2026-04-05 00:44:25.350608 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86)
2026-04-05 00:44:25.350611 | orchestrator |
2026-04-05 00:44:25.350615 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:44:25.350619 | orchestrator | Sunday 05 April 2026 00:44:22 +0000 (0:00:00.859) 0:00:06.154 **********
2026-04-05 00:44:25.350623 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 00:44:25.350627 | orchestrator |
2026-04-05 00:44:25.350631 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350639 | orchestrator | Sunday 05 April 2026 00:44:23 +0000 (0:00:00.777) 0:00:06.932 **********
2026-04-05 00:44:25.350643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-05 00:44:25.350647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-05 00:44:25.350650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-05 00:44:25.350654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-05 00:44:25.350658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-05 00:44:25.350662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-05 00:44:25.350666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-05 00:44:25.350670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-05 00:44:25.350674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-05 00:44:25.350677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-05 00:44:25.350681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-05 00:44:25.350685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-05 00:44:25.350692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-05 00:44:25.350696 | orchestrator |
2026-04-05 00:44:25.350700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350704 | orchestrator | Sunday 05 April 2026 00:44:23 +0000 (0:00:00.556) 0:00:07.488 **********
2026-04-05 00:44:25.350709 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350713 | orchestrator |
2026-04-05 00:44:25.350718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350722 | orchestrator | Sunday 05 April 2026 00:44:23 +0000 (0:00:00.253) 0:00:07.742 **********
2026-04-05 00:44:25.350727 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350731 | orchestrator |
2026-04-05 00:44:25.350736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350740 | orchestrator | Sunday 05 April 2026 00:44:24 +0000 (0:00:00.238) 0:00:07.981 **********
2026-04-05 00:44:25.350745 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350749 | orchestrator |
2026-04-05 00:44:25.350754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350758 | orchestrator | Sunday 05 April 2026 00:44:24 +0000 (0:00:00.251) 0:00:08.232 **********
2026-04-05 00:44:25.350763 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350767 | orchestrator |
2026-04-05 00:44:25.350772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350776 | orchestrator | Sunday 05 April 2026 00:44:24 +0000 (0:00:00.235) 0:00:08.467 **********
2026-04-05 00:44:25.350780 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350785 | orchestrator |
2026-04-05 00:44:25.350789 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350794 | orchestrator | Sunday 05 April 2026 00:44:24 +0000 (0:00:00.219) 0:00:08.687 **********
2026-04-05 00:44:25.350798 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350803 | orchestrator |
2026-04-05 00:44:25.350807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:25.350812 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.227) 0:00:08.914 **********
2026-04-05 00:44:25.350817 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:25.350821 | orchestrator |
2026-04-05 00:44:25.350828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:33.312723 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.189) 0:00:09.103 **********
2026-04-05 00:44:33.312829 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.312846 | orchestrator |
2026-04-05 00:44:33.312859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:33.312871 | orchestrator | Sunday 05 April 2026 00:44:25 +0000 (0:00:00.203) 0:00:09.307 **********
2026-04-05 00:44:33.312882 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-05 00:44:33.312893 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-05 00:44:33.312904 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-05 00:44:33.312915 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-05 00:44:33.312926 | orchestrator |
2026-04-05 00:44:33.312937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:33.312947 | orchestrator | Sunday 05 April 2026 00:44:26 +0000 (0:00:01.110) 0:00:10.417 **********
2026-04-05 00:44:33.312958 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.312968 | orchestrator |
2026-04-05 00:44:33.312979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:33.312990 | orchestrator | Sunday 05 April 2026 00:44:26 +0000 (0:00:00.196) 0:00:10.613 **********
2026-04-05 00:44:33.313000 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313011 | orchestrator |
2026-04-05 00:44:33.313022 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:33.313060 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.213) 0:00:10.827 **********
2026-04-05 00:44:33.313072 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313083 | orchestrator |
2026-04-05 00:44:33.313094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:33.313105 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.187) 0:00:11.014 **********
2026-04-05 00:44:33.313115 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313126 | orchestrator |
2026-04-05 00:44:33.313137 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 00:44:33.313148 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.191) 0:00:11.205 **********
2026-04-05 00:44:33.313159 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313169 | orchestrator |
2026-04-05 00:44:33.313180 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 00:44:33.313191 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.137) 0:00:11.342 **********
2026-04-05 00:44:33.313202 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9657aa76-f30a-575f-81fa-dc230eadde03'}})
2026-04-05 00:44:33.313212 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a27db0d-e52c-5340-bfad-66c075ab1c61'}})
2026-04-05 00:44:33.313223 | orchestrator |
2026-04-05 00:44:33.313234 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 00:44:33.313244 | orchestrator | Sunday 05 April 2026 00:44:27 +0000 (0:00:00.223) 0:00:11.566 **********
2026-04-05 00:44:33.313258 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.313272 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.313284 | orchestrator |
2026-04-05 00:44:33.313298 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-05 00:44:33.313310 | orchestrator | Sunday 05 April 2026 00:44:29 +0000 (0:00:01.906) 0:00:13.472 **********
2026-04-05 00:44:33.313323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.313356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.313369 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313382 | orchestrator |
2026-04-05 00:44:33.313394 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-05 00:44:33.313407 | orchestrator | Sunday 05 April 2026 00:44:29 +0000 (0:00:00.156) 0:00:13.629 **********
2026-04-05 00:44:33.313418 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.313429 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.313439 | orchestrator |
2026-04-05 00:44:33.313450 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-05 00:44:33.313461 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:01.451) 0:00:15.080 **********
2026-04-05 00:44:33.313505 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.313526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.313544 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313563 | orchestrator |
2026-04-05 00:44:33.313583 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-05 00:44:33.313607 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.166) 0:00:15.247 **********
2026-04-05 00:44:33.313636 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313647 | orchestrator |
2026-04-05 00:44:33.313660 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-05 00:44:33.313678 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.129) 0:00:15.376 **********
2026-04-05 00:44:33.313695 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.313712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.313729 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313746 | orchestrator |
2026-04-05 00:44:33.313763 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-05 00:44:33.313780 | orchestrator | Sunday 05 April 2026 00:44:31 +0000 (0:00:00.359) 0:00:15.736 **********
2026-04-05 00:44:33.313796 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313811 | orchestrator |
2026-04-05 00:44:33.313828 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-05 00:44:33.313846 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.122) 0:00:15.859 **********
2026-04-05 00:44:33.313863 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.313881 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.313900 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313920 | orchestrator |
2026-04-05 00:44:33.313948 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-05 00:44:33.313960 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.159) 0:00:16.019 **********
2026-04-05 00:44:33.313971 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.313982 | orchestrator |
2026-04-05 00:44:33.313992 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-05 00:44:33.314003 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.131) 0:00:16.151 **********
2026-04-05 00:44:33.314014 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.314094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.314105 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.314116 | orchestrator |
2026-04-05 00:44:33.314129 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-05 00:44:33.314148 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.154) 0:00:16.306 **********
2026-04-05 00:44:33.314165 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:33.314181 | orchestrator |
2026-04-05 00:44:33.314200 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-05 00:44:33.314220 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.138) 0:00:16.444 **********
2026-04-05 00:44:33.314238 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.314257 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.314269 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.314280 | orchestrator |
2026-04-05 00:44:33.314291 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-05 00:44:33.314311 | orchestrator | Sunday 05 April 2026 00:44:32 +0000 (0:00:00.163) 0:00:16.609 **********
2026-04-05 00:44:33.314322 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.314333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.314344 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.314354 | orchestrator |
2026-04-05 00:44:33.314365 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 00:44:33.314376 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.167) 0:00:16.776 **********
2026-04-05 00:44:33.314386 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:44:33.314397 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:44:33.314408 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.314418 | orchestrator |
2026-04-05 00:44:33.314429 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 00:44:33.314440 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.160) 0:00:16.936 **********
2026-04-05 00:44:33.314450 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:33.314461 | orchestrator |
2026-04-05 00:44:33.314534 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 00:44:33.314566 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.132) 0:00:17.069 **********
2026-04-05 00:44:39.908652 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.908753 | orchestrator |
2026-04-05 00:44:39.908767 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 00:44:39.908778 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.139) 0:00:17.208 **********
2026-04-05 00:44:39.908788 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.908798 | orchestrator |
2026-04-05 00:44:39.908808 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 00:44:39.908818 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.140) 0:00:17.349 **********
2026-04-05 00:44:39.908827 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:44:39.908838 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-05 00:44:39.908848 | orchestrator | }
2026-04-05 00:44:39.908858 | orchestrator |
2026-04-05 00:44:39.908868 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 00:44:39.908877 | orchestrator | Sunday 05 April 2026 00:44:33 +0000 (0:00:00.360) 0:00:17.709 **********
2026-04-05 00:44:39.908887 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:44:39.908896 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-05 00:44:39.908906 | orchestrator | }
2026-04-05 00:44:39.908915 | orchestrator |
2026-04-05 00:44:39.908925 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 00:44:39.908934 | orchestrator | Sunday 05 April 2026 00:44:34 +0000 (0:00:00.164) 0:00:17.873 **********
2026-04-05 00:44:39.908943 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:44:39.908953 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 00:44:39.908963 | orchestrator | }
2026-04-05 00:44:39.908972 | orchestrator |
2026-04-05 00:44:39.908982 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 00:44:39.908991 | orchestrator | Sunday 05 April 2026 00:44:34 +0000 (0:00:00.175) 0:00:18.049 **********
2026-04-05 00:44:39.909001 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:39.909011 | orchestrator |
2026-04-05 00:44:39.909021 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 00:44:39.909030 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.764) 0:00:18.813 **********
2026-04-05 00:44:39.909064 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:39.909074 | orchestrator |
2026-04-05 00:44:39.909084 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 00:44:39.909093 | orchestrator | Sunday 05 April 2026 00:44:35 +0000 (0:00:00.486) 0:00:19.300 **********
2026-04-05 00:44:39.909103 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:39.909112 | orchestrator |
2026-04-05 00:44:39.909122 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 00:44:39.909131 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.511) 0:00:19.812 **********
2026-04-05 00:44:39.909141 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:44:39.909150 | orchestrator |
2026-04-05 00:44:39.909160 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 00:44:39.909169 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.134) 0:00:19.946 **********
2026-04-05 00:44:39.909178 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909188 | orchestrator |
2026-04-05 00:44:39.909200 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 00:44:39.909212 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.129) 0:00:20.076 **********
2026-04-05 00:44:39.909224 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909235 | orchestrator |
2026-04-05 00:44:39.909246 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 00:44:39.909257 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.105) 0:00:20.182 **********
2026-04-05 00:44:39.909268 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 00:44:39.909280 | orchestrator |     "vgs_report": {
2026-04-05 00:44:39.909292 | orchestrator |         "vg": []
2026-04-05 00:44:39.909303 | orchestrator |     }
2026-04-05 00:44:39.909315 | orchestrator | }
2026-04-05 00:44:39.909325 | orchestrator |
2026-04-05 00:44:39.909336 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 00:44:39.909347 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.134) 0:00:20.317 **********
2026-04-05 00:44:39.909358 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909369 | orchestrator |
2026-04-05 00:44:39.909380 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 00:44:39.909392 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.143) 0:00:20.469 **********
2026-04-05 00:44:39.909402 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909414 | orchestrator |
2026-04-05 00:44:39.909425 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 00:44:39.909436 | orchestrator | Sunday 05 April 2026 00:44:36 +0000 (0:00:00.143) 0:00:20.612 **********
2026-04-05 00:44:39.909448 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909459 | orchestrator |
2026-04-05 00:44:39.909488 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 00:44:39.909500 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.340) 0:00:20.953 **********
2026-04-05 00:44:39.909511 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909522 | orchestrator |
2026-04-05 00:44:39.909533 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 00:44:39.909545 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.150) 0:00:21.103 **********
2026-04-05 00:44:39.909556 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909567 | orchestrator |
2026-04-05 00:44:39.909577 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 00:44:39.909586 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.142) 0:00:21.246 **********
2026-04-05 00:44:39.909595 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909605 | orchestrator |
2026-04-05 00:44:39.909614 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-05 00:44:39.909624 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.137) 0:00:21.383 **********
2026-04-05 00:44:39.909633 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909649 | orchestrator |
2026-04-05 00:44:39.909659 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 00:44:39.909669 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.169) 0:00:21.554 **********
2026-04-05 00:44:39.909693 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909703 | orchestrator |
2026-04-05 00:44:39.909729 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 00:44:39.909739 | orchestrator | Sunday 05 April 2026 00:44:37 +0000 (0:00:00.144) 0:00:21.699 **********
2026-04-05 00:44:39.909748 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909757 | orchestrator |
2026-04-05 00:44:39.909767 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 00:44:39.909777 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.144) 0:00:21.843 **********
2026-04-05 00:44:39.909786 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909796 | orchestrator |
2026-04-05 00:44:39.909805 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 00:44:39.909814 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.140) 0:00:21.984 **********
2026-04-05 00:44:39.909824 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909833 | orchestrator |
2026-04-05 00:44:39.909843 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 00:44:39.909852 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.129) 0:00:22.114 **********
2026-04-05 00:44:39.909861 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909871 | orchestrator |
2026-04-05 00:44:39.909880 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 00:44:39.909889 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.152) 0:00:22.266 **********
2026-04-05 00:44:39.909899 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909908 | orchestrator |
2026-04-05 00:44:39.909918 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 00:44:39.909927 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.135) 0:00:22.401 **********
2026-04-05 00:44:39.909937 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:44:39.909946 | orchestrator |
2026-04-05 00:44:39.909960 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 00:44:39.909970 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.147) 0:00:22.549 **********
2026-04-05 00:44:39.909981 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 
'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:39.909992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:39.910001 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:39.910010 | orchestrator | 2026-04-05 00:44:39.910080 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-05 00:44:39.910091 | orchestrator | Sunday 05 April 2026 00:44:38 +0000 (0:00:00.169) 0:00:22.719 ********** 2026-04-05 00:44:39.910101 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:39.910110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:39.910120 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:39.910129 | orchestrator | 2026-04-05 00:44:39.910139 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-05 00:44:39.910148 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.382) 0:00:23.102 ********** 2026-04-05 00:44:39.910158 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:39.910167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:39.910184 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:39.910194 | orchestrator | 2026-04-05 00:44:39.910203 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-05 
00:44:39.910213 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.176) 0:00:23.278 ********** 2026-04-05 00:44:39.910222 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:39.910232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:39.910241 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:39.910250 | orchestrator | 2026-04-05 00:44:39.910260 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-05 00:44:39.910269 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.172) 0:00:23.451 ********** 2026-04-05 00:44:39.910279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:39.910288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:39.910298 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:39.910307 | orchestrator | 2026-04-05 00:44:39.910316 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-05 00:44:39.910326 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.154) 0:00:23.606 ********** 2026-04-05 00:44:39.910342 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:45.450300 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 
'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:45.450407 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:45.450414 | orchestrator | 2026-04-05 00:44:45.450421 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-05 00:44:45.450428 | orchestrator | Sunday 05 April 2026 00:44:39 +0000 (0:00:00.142) 0:00:23.748 ********** 2026-04-05 00:44:45.450434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:45.450439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:45.450444 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:45.450448 | orchestrator | 2026-04-05 00:44:45.450453 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-05 00:44:45.450458 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.158) 0:00:23.906 ********** 2026-04-05 00:44:45.450462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:45.450522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:45.450528 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:45.450533 | orchestrator | 2026-04-05 00:44:45.450538 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-05 00:44:45.450543 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.174) 0:00:24.081 ********** 2026-04-05 00:44:45.450547 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:45.450553 | 
orchestrator | 2026-04-05 00:44:45.450575 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-05 00:44:45.450579 | orchestrator | Sunday 05 April 2026 00:44:40 +0000 (0:00:00.526) 0:00:24.608 ********** 2026-04-05 00:44:45.450584 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:45.450588 | orchestrator | 2026-04-05 00:44:45.450593 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-05 00:44:45.450597 | orchestrator | Sunday 05 April 2026 00:44:41 +0000 (0:00:00.507) 0:00:25.115 ********** 2026-04-05 00:44:45.450602 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:44:45.450606 | orchestrator | 2026-04-05 00:44:45.450611 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-05 00:44:45.450615 | orchestrator | Sunday 05 April 2026 00:44:41 +0000 (0:00:00.159) 0:00:25.274 ********** 2026-04-05 00:44:45.450620 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'vg_name': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'}) 2026-04-05 00:44:45.450626 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'vg_name': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'}) 2026-04-05 00:44:45.450631 | orchestrator | 2026-04-05 00:44:45.450636 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-05 00:44:45.450640 | orchestrator | Sunday 05 April 2026 00:44:41 +0000 (0:00:00.233) 0:00:25.508 ********** 2026-04-05 00:44:45.450645 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:45.450650 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 
'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:45.450654 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:45.450659 | orchestrator | 2026-04-05 00:44:45.450663 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-05 00:44:45.450668 | orchestrator | Sunday 05 April 2026 00:44:41 +0000 (0:00:00.158) 0:00:25.666 ********** 2026-04-05 00:44:45.450673 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:45.450677 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:45.450682 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:45.450686 | orchestrator | 2026-04-05 00:44:45.450691 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-05 00:44:45.450695 | orchestrator | Sunday 05 April 2026 00:44:42 +0000 (0:00:00.415) 0:00:26.082 ********** 2026-04-05 00:44:45.450700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})  2026-04-05 00:44:45.450704 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})  2026-04-05 00:44:45.450709 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:44:45.450713 | orchestrator | 2026-04-05 00:44:45.450718 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 00:44:45.450722 | orchestrator | Sunday 05 April 2026 00:44:42 +0000 (0:00:00.159) 0:00:26.241 ********** 2026-04-05 00:44:45.450740 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 
00:44:45.450745 | orchestrator |  "lvm_report": { 2026-04-05 00:44:45.450751 | orchestrator |  "lv": [ 2026-04-05 00:44:45.450756 | orchestrator |  { 2026-04-05 00:44:45.450761 | orchestrator |  "lv_name": "osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61", 2026-04-05 00:44:45.450766 | orchestrator |  "vg_name": "ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61" 2026-04-05 00:44:45.450771 | orchestrator |  }, 2026-04-05 00:44:45.450780 | orchestrator |  { 2026-04-05 00:44:45.450785 | orchestrator |  "lv_name": "osd-block-9657aa76-f30a-575f-81fa-dc230eadde03", 2026-04-05 00:44:45.450789 | orchestrator |  "vg_name": "ceph-9657aa76-f30a-575f-81fa-dc230eadde03" 2026-04-05 00:44:45.450794 | orchestrator |  } 2026-04-05 00:44:45.450799 | orchestrator |  ], 2026-04-05 00:44:45.450803 | orchestrator |  "pv": [ 2026-04-05 00:44:45.450808 | orchestrator |  { 2026-04-05 00:44:45.450812 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 00:44:45.450817 | orchestrator |  "vg_name": "ceph-9657aa76-f30a-575f-81fa-dc230eadde03" 2026-04-05 00:44:45.450822 | orchestrator |  }, 2026-04-05 00:44:45.450826 | orchestrator |  { 2026-04-05 00:44:45.450832 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 00:44:45.450837 | orchestrator |  "vg_name": "ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61" 2026-04-05 00:44:45.450842 | orchestrator |  } 2026-04-05 00:44:45.450848 | orchestrator |  ] 2026-04-05 00:44:45.450853 | orchestrator |  } 2026-04-05 00:44:45.450859 | orchestrator | } 2026-04-05 00:44:45.450865 | orchestrator | 2026-04-05 00:44:45.450870 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-05 00:44:45.450875 | orchestrator | 2026-04-05 00:44:45.450881 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 00:44:45.450886 | orchestrator | Sunday 05 April 2026 00:44:42 +0000 (0:00:00.317) 0:00:26.559 ********** 2026-04-05 00:44:45.450892 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-05 00:44:45.450897 | orchestrator | 2026-04-05 00:44:45.450902 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 00:44:45.450908 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.253) 0:00:26.813 ********** 2026-04-05 00:44:45.450913 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:44:45.450918 | orchestrator | 2026-04-05 00:44:45.450924 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:45.450929 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.245) 0:00:27.058 ********** 2026-04-05 00:44:45.450934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:44:45.450939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:44:45.450944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:44:45.450949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:44:45.450955 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:44:45.450960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:44:45.450965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:44:45.450970 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:44:45.450976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-05 00:44:45.450986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:44:45.450992 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-05 00:44:45.450997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:44:45.451003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:44:45.451008 | orchestrator | 2026-04-05 00:44:45.451013 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:45.451018 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.422) 0:00:27.480 ********** 2026-04-05 00:44:45.451024 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:45.451033 | orchestrator | 2026-04-05 00:44:45.451038 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:45.451043 | orchestrator | Sunday 05 April 2026 00:44:43 +0000 (0:00:00.196) 0:00:27.677 ********** 2026-04-05 00:44:45.451049 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:45.451054 | orchestrator | 2026-04-05 00:44:45.451060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:45.451065 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 (0:00:00.194) 0:00:27.871 ********** 2026-04-05 00:44:45.451070 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:45.451075 | orchestrator | 2026-04-05 00:44:45.451080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:45.451085 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 (0:00:00.223) 0:00:28.095 ********** 2026-04-05 00:44:45.451091 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:45.451096 | orchestrator | 2026-04-05 00:44:45.451102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:45.451107 | orchestrator | Sunday 05 April 2026 00:44:44 +0000 
(0:00:00.653) 0:00:28.748 ********** 2026-04-05 00:44:45.451112 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:45.451117 | orchestrator | 2026-04-05 00:44:45.451123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:45.451128 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.209) 0:00:28.958 ********** 2026-04-05 00:44:45.451133 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:45.451138 | orchestrator | 2026-04-05 00:44:45.451146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:55.461523 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.250) 0:00:29.209 ********** 2026-04-05 00:44:55.461611 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.461622 | orchestrator | 2026-04-05 00:44:55.461631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:55.461640 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.225) 0:00:29.434 ********** 2026-04-05 00:44:55.461649 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.461657 | orchestrator | 2026-04-05 00:44:55.461665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:55.461672 | orchestrator | Sunday 05 April 2026 00:44:45 +0000 (0:00:00.240) 0:00:29.675 ********** 2026-04-05 00:44:55.461680 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e) 2026-04-05 00:44:55.461689 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e) 2026-04-05 00:44:55.461696 | orchestrator | 2026-04-05 00:44:55.461704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:55.461711 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 
(0:00:00.446) 0:00:30.121 ********** 2026-04-05 00:44:55.461718 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189) 2026-04-05 00:44:55.461726 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189) 2026-04-05 00:44:55.461734 | orchestrator | 2026-04-05 00:44:55.461754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:55.461762 | orchestrator | Sunday 05 April 2026 00:44:46 +0000 (0:00:00.434) 0:00:30.556 ********** 2026-04-05 00:44:55.461770 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a) 2026-04-05 00:44:55.461777 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a) 2026-04-05 00:44:55.461784 | orchestrator | 2026-04-05 00:44:55.461792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:55.461800 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.436) 0:00:30.992 ********** 2026-04-05 00:44:55.461807 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8) 2026-04-05 00:44:55.461834 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8) 2026-04-05 00:44:55.461842 | orchestrator | 2026-04-05 00:44:55.461850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:44:55.461858 | orchestrator | Sunday 05 April 2026 00:44:47 +0000 (0:00:00.461) 0:00:31.454 ********** 2026-04-05 00:44:55.461866 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:44:55.461874 | orchestrator | 2026-04-05 00:44:55.461882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 
00:44:55.461890 | orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.376) 0:00:31.831 ********** 2026-04-05 00:44:55.461897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-05 00:44:55.461906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-05 00:44:55.461914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-05 00:44:55.461922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-05 00:44:55.461930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-05 00:44:55.461937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-05 00:44:55.461944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-05 00:44:55.461951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-05 00:44:55.461957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-05 00:44:55.461965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-05 00:44:55.461973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-05 00:44:55.461981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-05 00:44:55.461989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-05 00:44:55.461997 | orchestrator | 2026-04-05 00:44:55.462005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462053 | 
orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.649) 0:00:32.480 ********** 2026-04-05 00:44:55.462061 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462069 | orchestrator | 2026-04-05 00:44:55.462076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462083 | orchestrator | Sunday 05 April 2026 00:44:48 +0000 (0:00:00.197) 0:00:32.678 ********** 2026-04-05 00:44:55.462090 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462098 | orchestrator | 2026-04-05 00:44:55.462105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462112 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.240) 0:00:32.918 ********** 2026-04-05 00:44:55.462119 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462126 | orchestrator | 2026-04-05 00:44:55.462147 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462155 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.222) 0:00:33.141 ********** 2026-04-05 00:44:55.462163 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462170 | orchestrator | 2026-04-05 00:44:55.462176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462183 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.202) 0:00:33.343 ********** 2026-04-05 00:44:55.462190 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462198 | orchestrator | 2026-04-05 00:44:55.462205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462219 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.200) 0:00:33.544 ********** 2026-04-05 00:44:55.462227 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462235 | orchestrator | 2026-04-05 
00:44:55.462242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462250 | orchestrator | Sunday 05 April 2026 00:44:49 +0000 (0:00:00.179) 0:00:33.723 ********** 2026-04-05 00:44:55.462258 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462266 | orchestrator | 2026-04-05 00:44:55.462273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462281 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.193) 0:00:33.916 ********** 2026-04-05 00:44:55.462288 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462296 | orchestrator | 2026-04-05 00:44:55.462303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462314 | orchestrator | Sunday 05 April 2026 00:44:50 +0000 (0:00:00.187) 0:00:34.104 ********** 2026-04-05 00:44:55.462321 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-05 00:44:55.462328 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-05 00:44:55.462335 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-05 00:44:55.462342 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-05 00:44:55.462349 | orchestrator | 2026-04-05 00:44:55.462355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462362 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.726) 0:00:34.830 ********** 2026-04-05 00:44:55.462369 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:44:55.462376 | orchestrator | 2026-04-05 00:44:55.462383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:44:55.462390 | orchestrator | Sunday 05 April 2026 00:44:51 +0000 (0:00:00.188) 0:00:35.019 ********** 2026-04-05 00:44:55.462396 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
00:44:55.462403 | orchestrator | 
2026-04-05 00:44:55.462410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:55.462417 | orchestrator | Sunday 05 April 2026  00:44:51 +0000 (0:00:00.201)       0:00:35.220 **********
2026-04-05 00:44:55.462424 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:55.462431 | orchestrator | 
2026-04-05 00:44:55.462438 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 00:44:55.462445 | orchestrator | Sunday 05 April 2026  00:44:51 +0000 (0:00:00.515)       0:00:35.736 **********
2026-04-05 00:44:55.462451 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:55.462458 | orchestrator | 
2026-04-05 00:44:55.462465 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 00:44:55.462472 | orchestrator | Sunday 05 April 2026  00:44:52 +0000 (0:00:00.182)       0:00:35.919 **********
2026-04-05 00:44:55.462491 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:55.462498 | orchestrator | 
2026-04-05 00:44:55.462505 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 00:44:55.462512 | orchestrator | Sunday 05 April 2026  00:44:52 +0000 (0:00:00.147)       0:00:36.067 **********
2026-04-05 00:44:55.462519 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '84662fb7-c7ec-5f43-83c1-849532919194'}})
2026-04-05 00:44:55.462526 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'df39e39b-9449-5ecb-9afa-151663e06960'}})
2026-04-05 00:44:55.462534 | orchestrator | 
2026-04-05 00:44:55.462541 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 00:44:55.462548 | orchestrator | Sunday 05 April 2026  00:44:52 +0000 (0:00:00.197)       0:00:36.264 **********
2026-04-05 00:44:55.462555 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'})
2026-04-05 00:44:55.462564 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'})
2026-04-05 00:44:55.462577 | orchestrator | 
2026-04-05 00:44:55.462584 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-05 00:44:55.462591 | orchestrator | Sunday 05 April 2026  00:44:54 +0000 (0:00:01.731)       0:00:37.996 **********
2026-04-05 00:44:55.462597 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:44:55.462606 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:44:55.462612 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:44:55.462620 | orchestrator | 
2026-04-05 00:44:55.462627 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-05 00:44:55.462633 | orchestrator | Sunday 05 April 2026  00:44:54 +0000 (0:00:00.135)       0:00:38.131 **********
2026-04-05 00:44:55.462639 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'})
2026-04-05 00:44:55.462652 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'})
2026-04-05 00:45:01.174391 | orchestrator | 
2026-04-05 00:45:01.174560 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-05 00:45:01.174576 | orchestrator | Sunday 05 April 2026  00:44:55 +0000 (0:00:01.152)       0:00:39.284 **********
2026-04-05 00:45:01.174585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:01.174595 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:01.174604 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174613 | orchestrator | 
2026-04-05 00:45:01.174621 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-05 00:45:01.174629 | orchestrator | Sunday 05 April 2026  00:44:55 +0000 (0:00:00.122)       0:00:39.424 **********
2026-04-05 00:45:01.174637 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174645 | orchestrator | 
2026-04-05 00:45:01.174653 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-05 00:45:01.174660 | orchestrator | Sunday 05 April 2026  00:44:55 +0000 (0:00:00.122)       0:00:39.546 **********
2026-04-05 00:45:01.174669 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:01.174677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:01.174685 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174692 | orchestrator | 
2026-04-05 00:45:01.174700 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-05 00:45:01.174708 | orchestrator | Sunday 05 April 2026  00:44:55 +0000 (0:00:00.138)       0:00:39.685 **********
2026-04-05 00:45:01.174716 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174723 | orchestrator | 
2026-04-05 00:45:01.174731 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-05 00:45:01.174739 | orchestrator | Sunday 05 April 2026  00:44:56 +0000 (0:00:00.133)       0:00:39.818 **********
2026-04-05 00:45:01.174747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:01.174755 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:01.174784 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174792 | orchestrator | 
2026-04-05 00:45:01.174800 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-05 00:45:01.174808 | orchestrator | Sunday 05 April 2026  00:44:56 +0000 (0:00:00.166)       0:00:39.984 **********
2026-04-05 00:45:01.174816 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174824 | orchestrator | 
2026-04-05 00:45:01.174847 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-05 00:45:01.174855 | orchestrator | Sunday 05 April 2026  00:44:56 +0000 (0:00:00.376)       0:00:40.361 **********
2026-04-05 00:45:01.174863 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:01.174871 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:01.174879 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174886 | orchestrator | 
2026-04-05 00:45:01.174894 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-05 00:45:01.174902 | orchestrator | Sunday 05 April 2026  00:44:56 +0000 (0:00:00.165)       0:00:40.527 **********
2026-04-05 00:45:01.174910 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:01.174919 | orchestrator | 
2026-04-05 00:45:01.174927 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-05 00:45:01.174936 | orchestrator | Sunday 05 April 2026  00:44:56 +0000 (0:00:00.135)       0:00:40.662 **********
2026-04-05 00:45:01.174945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:01.174955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:01.174969 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.174983 | orchestrator | 
2026-04-05 00:45:01.174995 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-05 00:45:01.175008 | orchestrator | Sunday 05 April 2026  00:44:57 +0000 (0:00:00.159)       0:00:40.822 **********
2026-04-05 00:45:01.175021 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:01.175036 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:01.175050 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175064 | orchestrator | 
2026-04-05 00:45:01.175077 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 00:45:01.175103 | orchestrator | Sunday 05 April 2026  00:44:57 +0000 (0:00:00.160)       0:00:40.982 **********
2026-04-05 00:45:01.175113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:01.175123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:01.175132 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175142 | orchestrator | 
2026-04-05 00:45:01.175151 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 00:45:01.175160 | orchestrator | Sunday 05 April 2026  00:44:57 +0000 (0:00:00.153)       0:00:41.136 **********
2026-04-05 00:45:01.175169 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175179 | orchestrator | 
2026-04-05 00:45:01.175188 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 00:45:01.175197 | orchestrator | Sunday 05 April 2026  00:44:57 +0000 (0:00:00.137)       0:00:41.274 **********
2026-04-05 00:45:01.175214 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175221 | orchestrator | 
2026-04-05 00:45:01.175229 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 00:45:01.175242 | orchestrator | Sunday 05 April 2026  00:44:57 +0000 (0:00:00.130)       0:00:41.404 **********
2026-04-05 00:45:01.175250 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175258 | orchestrator | 
2026-04-05 00:45:01.175266 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 00:45:01.175274 | orchestrator | Sunday 05 April 2026  00:44:57 +0000 (0:00:00.139)       0:00:41.544 **********
2026-04-05 00:45:01.175282 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:45:01.175290 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-05 00:45:01.175298 | orchestrator | }
2026-04-05 00:45:01.175306 | orchestrator | 
2026-04-05 00:45:01.175314 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 00:45:01.175322 | orchestrator | Sunday 05 April 2026  00:44:57 +0000 (0:00:00.146)       0:00:41.690 **********
2026-04-05 00:45:01.175329 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:45:01.175337 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-05 00:45:01.175345 | orchestrator | }
2026-04-05 00:45:01.175353 | orchestrator | 
2026-04-05 00:45:01.175361 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 00:45:01.175369 | orchestrator | Sunday 05 April 2026  00:44:58 +0000 (0:00:00.142)       0:00:41.832 **********
2026-04-05 00:45:01.175376 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:45:01.175384 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 00:45:01.175393 | orchestrator | }
2026-04-05 00:45:01.175401 | orchestrator | 
2026-04-05 00:45:01.175408 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 00:45:01.175416 | orchestrator | Sunday 05 April 2026  00:44:58 +0000 (0:00:00.144)       0:00:41.977 **********
2026-04-05 00:45:01.175424 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:01.175432 | orchestrator | 
2026-04-05 00:45:01.175439 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 00:45:01.175447 | orchestrator | Sunday 05 April 2026  00:44:58 +0000 (0:00:00.739)       0:00:42.716 **********
2026-04-05 00:45:01.175455 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:01.175463 | orchestrator | 
2026-04-05 00:45:01.175471 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 00:45:01.175502 | orchestrator | Sunday 05 April 2026  00:44:59 +0000 (0:00:00.521)       0:00:43.238 **********
2026-04-05 00:45:01.175512 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:01.175520 | orchestrator | 
2026-04-05 00:45:01.175528 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 00:45:01.175535 | orchestrator | Sunday 05 April 2026  00:44:59 +0000 (0:00:00.484)       0:00:43.723 **********
2026-04-05 00:45:01.175543 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:01.175551 | orchestrator | 
2026-04-05 00:45:01.175559 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 00:45:01.175566 | orchestrator | Sunday 05 April 2026  00:45:00 +0000 (0:00:00.160)       0:00:43.884 **********
2026-04-05 00:45:01.175574 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175582 | orchestrator | 
2026-04-05 00:45:01.175590 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 00:45:01.175598 | orchestrator | Sunday 05 April 2026  00:45:00 +0000 (0:00:00.163)       0:00:44.047 **********
2026-04-05 00:45:01.175605 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175613 | orchestrator | 
2026-04-05 00:45:01.175621 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 00:45:01.175629 | orchestrator | Sunday 05 April 2026  00:45:00 +0000 (0:00:00.116)       0:00:44.163 **********
2026-04-05 00:45:01.175637 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:45:01.175645 | orchestrator |  "vgs_report": {
2026-04-05 00:45:01.175653 | orchestrator |  "vg": []
2026-04-05 00:45:01.175661 | orchestrator |  }
2026-04-05 00:45:01.175669 | orchestrator | }
2026-04-05 00:45:01.175683 | orchestrator | 
2026-04-05 00:45:01.175691 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 00:45:01.175699 | orchestrator | Sunday 05 April 2026  00:45:00 +0000 (0:00:00.136)       0:00:44.300 **********
2026-04-05 00:45:01.175707 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175715 | orchestrator | 
2026-04-05 00:45:01.175723 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 00:45:01.175730 | orchestrator | Sunday 05 April 2026  00:45:00 +0000 (0:00:00.139)       0:00:44.439 **********
2026-04-05 00:45:01.175738 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175746 | orchestrator | 
2026-04-05 00:45:01.175754 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 00:45:01.175761 | orchestrator | Sunday 05 April 2026  00:45:00 +0000 (0:00:00.174)       0:00:44.614 **********
2026-04-05 00:45:01.175769 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175777 | orchestrator | 
2026-04-05 00:45:01.175785 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 00:45:01.175793 | orchestrator | Sunday 05 April 2026  00:45:01 +0000 (0:00:00.156)       0:00:44.770 **********
2026-04-05 00:45:01.175801 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:01.175808 | orchestrator | 
2026-04-05 00:45:01.175821 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 00:45:06.100156 | orchestrator | Sunday 05 April 2026  00:45:01 +0000 (0:00:00.158)       0:00:44.928 **********
2026-04-05 00:45:06.100285 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100310 | orchestrator | 
2026-04-05 00:45:06.100331 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 00:45:06.100350 | orchestrator | Sunday 05 April 2026  00:45:01 +0000 (0:00:00.143)       0:00:45.072 **********
2026-04-05 00:45:06.100369 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100387 | orchestrator | 
2026-04-05 00:45:06.100405 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
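The "Create dict of block VGs -> PVs" and "Create block VGs/LVs" tasks above derive one VG/LV pair per entry in `ceph_osd_devices` from its `osd_lvm_uuid`: the volume group is named `ceph-<uuid>` and the logical volume `osd-block-<uuid>`. A minimal sketch of that naming scheme (the helper `lvm_volumes_from_osd_devices` is illustrative, not the playbook's actual code):

```python
# Sketch of the VG/LV naming visible in the loop items above: each device in
# ceph_osd_devices carries an osd_lvm_uuid, from which a volume group
# "ceph-<uuid>" and a logical volume "osd-block-<uuid>" are derived.
# Illustrative only; not the OSISM playbook implementation.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "84662fb7-c7ec-5f43-83c1-849532919194"},
    "sdc": {"osd_lvm_uuid": "df39e39b-9449-5ecb-9afa-151663e06960"},
}

def lvm_volumes_from_osd_devices(devices: dict) -> list:
    """Build the (data, data_vg) items looped over by 'Create block VGs/LVs'."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

for item in lvm_volumes_from_osd_devices(ceph_osd_devices):
    print(item["data_vg"], item["data"])
```

This reproduces exactly the `data`/`data_vg` pairs that appear as loop items in the log.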
2026-04-05 00:45:06.100425 | orchestrator | Sunday 05 April 2026  00:45:01 +0000 (0:00:00.369)       0:00:45.441 **********
2026-04-05 00:45:06.100443 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100461 | orchestrator | 
2026-04-05 00:45:06.100508 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 00:45:06.100528 | orchestrator | Sunday 05 April 2026  00:45:01 +0000 (0:00:00.143)       0:00:45.585 **********
2026-04-05 00:45:06.100547 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100566 | orchestrator | 
2026-04-05 00:45:06.100585 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 00:45:06.100603 | orchestrator | Sunday 05 April 2026  00:45:01 +0000 (0:00:00.140)       0:00:45.726 **********
2026-04-05 00:45:06.100643 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100662 | orchestrator | 
2026-04-05 00:45:06.100680 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 00:45:06.100699 | orchestrator | Sunday 05 April 2026  00:45:02 +0000 (0:00:00.143)       0:00:45.869 **********
2026-04-05 00:45:06.100717 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100735 | orchestrator | 
2026-04-05 00:45:06.100753 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 00:45:06.100771 | orchestrator | Sunday 05 April 2026  00:45:02 +0000 (0:00:00.143)       0:00:46.013 **********
2026-04-05 00:45:06.100790 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100808 | orchestrator | 
2026-04-05 00:45:06.100825 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 00:45:06.100845 | orchestrator | Sunday 05 April 2026  00:45:02 +0000 (0:00:00.134)       0:00:46.148 **********
2026-04-05 00:45:06.100863 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100881 | orchestrator | 
2026-04-05 00:45:06.100900 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 00:45:06.100918 | orchestrator | Sunday 05 April 2026  00:45:02 +0000 (0:00:00.141)       0:00:46.289 **********
2026-04-05 00:45:06.100937 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.100988 | orchestrator | 
2026-04-05 00:45:06.101007 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 00:45:06.101025 | orchestrator | Sunday 05 April 2026  00:45:02 +0000 (0:00:00.154)       0:00:46.444 **********
2026-04-05 00:45:06.101044 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101062 | orchestrator | 
2026-04-05 00:45:06.101081 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 00:45:06.101099 | orchestrator | Sunday 05 April 2026  00:45:02 +0000 (0:00:00.140)       0:00:46.585 **********
2026-04-05 00:45:06.101120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.101140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.101158 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101177 | orchestrator | 
2026-04-05 00:45:06.101194 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-05 00:45:06.101211 | orchestrator | Sunday 05 April 2026  00:45:02 +0000 (0:00:00.154)       0:00:46.739 **********
2026-04-05 00:45:06.101229 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.101248 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.101266 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101285 | orchestrator | 
2026-04-05 00:45:06.101303 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-05 00:45:06.101322 | orchestrator | Sunday 05 April 2026  00:45:03 +0000 (0:00:00.165)       0:00:46.905 **********
2026-04-05 00:45:06.101341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.101359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.101376 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101395 | orchestrator | 
2026-04-05 00:45:06.101413 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-05 00:45:06.101430 | orchestrator | Sunday 05 April 2026  00:45:03 +0000 (0:00:00.160)       0:00:47.066 **********
2026-04-05 00:45:06.101448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.101469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.101583 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101604 | orchestrator | 
2026-04-05 00:45:06.101649 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-05 00:45:06.101669 | orchestrator | Sunday 05 April 2026  00:45:03 +0000 (0:00:00.401)       0:00:47.467 **********
2026-04-05 00:45:06.101686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.101705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.101723 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101742 | orchestrator | 
2026-04-05 00:45:06.101758 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-05 00:45:06.101774 | orchestrator | Sunday 05 April 2026  00:45:03 +0000 (0:00:00.170)       0:00:47.638 **********
2026-04-05 00:45:06.101810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.101831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.101850 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101868 | orchestrator | 
2026-04-05 00:45:06.101887 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-05 00:45:06.101906 | orchestrator | Sunday 05 April 2026  00:45:04 +0000 (0:00:00.206)       0:00:47.845 **********
2026-04-05 00:45:06.101924 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.101943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.101962 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.101981 | orchestrator | 
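The skipped tasks above enforce two sizing guards on DB LVs: the total size requested from a VG must not exceed what is available, and each DB LV must be at least 30 GiB. A rough sketch of such a guard, with made-up example sizes (this mirrors only the task names, not the playbook's actual implementation):

```python
# Illustrative sketch of the two guards named in the tasks above:
# "Fail if size of DB LVs ... > available" and "Fail if DB LV size < 30 GiB".
# All sizes are in bytes; the example inputs are invented.

MIN_DB_LV_BYTES = 30 * 1024**3  # 30 GiB lower bound per DB LV

def check_db_vg(vg_free_bytes: int, num_osds: int, db_lv_bytes: int) -> None:
    """Raise if the requested DB LVs do not fit or fall below the minimum size."""
    if db_lv_bytes < MIN_DB_LV_BYTES:
        raise ValueError("DB LV size < 30 GiB")
    if num_osds * db_lv_bytes > vg_free_bytes:
        raise ValueError("size of DB LVs > available in VG")

# Example: two 40 GiB DB LVs fit into a 100 GiB DB VG, so this passes.
check_db_vg(vg_free_bytes=100 * 1024**3, num_osds=2, db_lv_bytes=40 * 1024**3)
```

In this run both checks are skipped because no `ceph_db_devices`/`ceph_db_wal_devices` are configured for the node.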
2026-04-05 00:45:06.101999 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-05 00:45:06.102091 | orchestrator | Sunday 05 April 2026  00:45:04 +0000 (0:00:00.174)       0:00:48.019 **********
2026-04-05 00:45:06.102112 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.102133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.102154 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.102175 | orchestrator | 
2026-04-05 00:45:06.102195 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-05 00:45:06.102216 | orchestrator | Sunday 05 April 2026  00:45:04 +0000 (0:00:00.183)       0:00:48.203 **********
2026-04-05 00:45:06.102237 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:06.102258 | orchestrator | 
2026-04-05 00:45:06.102278 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-05 00:45:06.102299 | orchestrator | Sunday 05 April 2026  00:45:04 +0000 (0:00:00.506)       0:00:48.710 **********
2026-04-05 00:45:06.102319 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:06.102339 | orchestrator | 
2026-04-05 00:45:06.102359 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-05 00:45:06.102378 | orchestrator | Sunday 05 April 2026  00:45:05 +0000 (0:00:00.539)       0:00:49.250 **********
2026-04-05 00:45:06.102398 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:45:06.102418 | orchestrator | 
2026-04-05 00:45:06.102439 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-05 00:45:06.102460 | orchestrator | Sunday 05 April 2026  00:45:05 +0000 (0:00:00.199)       0:00:49.450 **********
2026-04-05 00:45:06.102506 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'vg_name': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'})
2026-04-05 00:45:06.102528 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'vg_name': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'})
2026-04-05 00:45:06.102545 | orchestrator | 
2026-04-05 00:45:06.102564 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-05 00:45:06.102582 | orchestrator | Sunday 05 April 2026  00:45:05 +0000 (0:00:00.163)       0:00:49.613 **********
2026-04-05 00:45:06.102600 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.102674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:06.102693 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:06.102724 | orchestrator | 
2026-04-05 00:45:06.102742 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-05 00:45:06.102759 | orchestrator | Sunday 05 April 2026  00:45:06 +0000 (0:00:00.168)       0:00:49.782 **********
2026-04-05 00:45:06.102777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:06.102809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:11.762275 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:11.762353 | orchestrator | 
2026-04-05 00:45:11.762368 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-05 00:45:11.762378 | orchestrator | Sunday 05 April 2026  00:45:06 +0000 (0:00:00.168)       0:00:49.951 **********
2026-04-05 00:45:11.762389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'}) 
2026-04-05 00:45:11.762399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'}) 
2026-04-05 00:45:11.762409 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:45:11.762418 | orchestrator | 
2026-04-05 00:45:11.762428 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-05 00:45:11.762438 | orchestrator | Sunday 05 April 2026  00:45:06 +0000 (0:00:00.195)       0:00:50.146 **********
2026-04-05 00:45:11.762448 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 00:45:11.762457 | orchestrator |  "lvm_report": {
2026-04-05 00:45:11.762468 | orchestrator |  "lv": [
2026-04-05 00:45:11.762518 | orchestrator |  {
2026-04-05 00:45:11.762531 | orchestrator |  "lv_name": "osd-block-84662fb7-c7ec-5f43-83c1-849532919194",
2026-04-05 00:45:11.762543 | orchestrator |  "vg_name": "ceph-84662fb7-c7ec-5f43-83c1-849532919194"
2026-04-05 00:45:11.762554 | orchestrator |  },
2026-04-05 00:45:11.762564 | orchestrator |  {
2026-04-05 00:45:11.762575 | orchestrator |  "lv_name": "osd-block-df39e39b-9449-5ecb-9afa-151663e06960",
2026-04-05 00:45:11.762585 | orchestrator |  "vg_name": "ceph-df39e39b-9449-5ecb-9afa-151663e06960"
2026-04-05 00:45:11.762596 | orchestrator |  }
2026-04-05 00:45:11.762606 | orchestrator |  ],
2026-04-05 00:45:11.762616 | orchestrator |  "pv": [
2026-04-05 00:45:11.762626 | orchestrator |  {
2026-04-05 00:45:11.762636 | orchestrator |  "pv_name": "/dev/sdb",
2026-04-05 00:45:11.762646 | orchestrator |  "vg_name": "ceph-84662fb7-c7ec-5f43-83c1-849532919194"
2026-04-05 00:45:11.762657 | orchestrator |  },
2026-04-05 00:45:11.762666 | orchestrator |  {
2026-04-05 00:45:11.762676 | orchestrator |  "pv_name": "/dev/sdc",
2026-04-05 00:45:11.762687 | orchestrator |  "vg_name": "ceph-df39e39b-9449-5ecb-9afa-151663e06960"
2026-04-05 00:45:11.762698 | orchestrator |  }
2026-04-05 00:45:11.762709 | orchestrator |  ]
2026-04-05 00:45:11.762718 | orchestrator |  }
2026-04-05 00:45:11.762729 | orchestrator | }
2026-04-05 00:45:11.762739 | orchestrator | 
2026-04-05 00:45:11.762749 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-05 00:45:11.762760 | orchestrator | 
2026-04-05 00:45:11.762770 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 00:45:11.762780 | orchestrator | Sunday 05 April 2026  00:45:06 +0000 (0:00:00.513)       0:00:50.660 **********
2026-04-05 00:45:11.762791 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-05 00:45:11.762801 | orchestrator | 
2026-04-05 00:45:11.762812 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 00:45:11.762822 | orchestrator | Sunday 05 April 2026  00:45:07 +0000 (0:00:00.250)       0:00:50.911 **********
2026-04-05 00:45:11.762848 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:45:11.762859 | orchestrator | 
2026-04-05 00:45:11.762871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.762882 | orchestrator | Sunday 05 April 2026  00:45:07 +0000 (0:00:00.220)       0:00:51.132 **********
2026-04-05 00:45:11.762892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-05 00:45:11.762913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-05 00:45:11.762931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-05 00:45:11.762945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-05 00:45:11.762954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-05 00:45:11.762963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-05 00:45:11.762973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-05 00:45:11.762983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-05 00:45:11.762993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-05 00:45:11.763003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-05 00:45:11.763012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-05 00:45:11.763022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-05 00:45:11.763032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-05 00:45:11.763042 | orchestrator | 
2026-04-05 00:45:11.763052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763062 | orchestrator | Sunday 05 April 2026  00:45:07 +0000 (0:00:00.380)       0:00:51.512 **********
2026-04-05 00:45:11.763071 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763080 | orchestrator | 
2026-04-05 00:45:11.763089 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763098 | orchestrator | Sunday 05 April 2026  00:45:07 +0000 (0:00:00.189)       0:00:51.702 **********
2026-04-05 00:45:11.763108 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763117 | orchestrator | 
2026-04-05 00:45:11.763127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763150 | orchestrator | Sunday 05 April 2026  00:45:08 +0000 (0:00:00.188)       0:00:51.891 **********
2026-04-05 00:45:11.763160 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763170 | orchestrator | 
2026-04-05 00:45:11.763179 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763188 | orchestrator | Sunday 05 April 2026  00:45:08 +0000 (0:00:00.183)       0:00:52.074 **********
2026-04-05 00:45:11.763197 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763206 | orchestrator | 
2026-04-05 00:45:11.763216 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763225 | orchestrator | Sunday 05 April 2026  00:45:08 +0000 (0:00:00.183)       0:00:52.257 **********
2026-04-05 00:45:11.763235 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763245 | orchestrator | 
2026-04-05 00:45:11.763254 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763263 | orchestrator | Sunday 05 April 2026  00:45:08 +0000 (0:00:00.194)       0:00:52.451 **********
2026-04-05 00:45:11.763273 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763282 | orchestrator | 
2026-04-05 00:45:11.763292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763307 | orchestrator | Sunday 05 April 2026  00:45:09 +0000 (0:00:00.464)       0:00:52.916 **********
2026-04-05 00:45:11.763317 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763333 | orchestrator | 
2026-04-05 00:45:11.763343 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763352 | orchestrator | Sunday 05 April 2026  00:45:09 +0000 (0:00:00.186)       0:00:53.102 **********
2026-04-05 00:45:11.763362 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:45:11.763371 | orchestrator | 
2026-04-05 00:45:11.763381 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763390 | orchestrator | Sunday 05 April 2026  00:45:09 +0000 (0:00:00.195)       0:00:53.297 **********
2026-04-05 00:45:11.763399 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e)
2026-04-05 00:45:11.763409 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e)
2026-04-05 00:45:11.763419 | orchestrator | 
2026-04-05 00:45:11.763428 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763438 | orchestrator | Sunday 05 April 2026  00:45:09 +0000 (0:00:00.377)       0:00:53.674 **********
2026-04-05 00:45:11.763448 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6)
2026-04-05 00:45:11.763457 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6)
2026-04-05 00:45:11.763466 | orchestrator | 
2026-04-05 00:45:11.763476 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 00:45:11.763500 | orchestrator | Sunday 05 April 2026  00:45:10 +0000 (0:00:00.380)       0:00:54.055 **********
2026-04-05 00:45:11.763510 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379)
2026-04-05 00:45:11.763520 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379)
2026-04-05 00:45:11.763529 | orchestrator | 
2026-04-05 00:45:11.763539 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:45:11.763548 | orchestrator | Sunday 05 April 2026 00:45:10 +0000 (0:00:00.395) 0:00:54.451 ********** 2026-04-05 00:45:11.763557 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da) 2026-04-05 00:45:11.763567 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da) 2026-04-05 00:45:11.763576 | orchestrator | 2026-04-05 00:45:11.763585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 00:45:11.763595 | orchestrator | Sunday 05 April 2026 00:45:11 +0000 (0:00:00.395) 0:00:54.847 ********** 2026-04-05 00:45:11.763604 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 00:45:11.763614 | orchestrator | 2026-04-05 00:45:11.763624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:11.763633 | orchestrator | Sunday 05 April 2026 00:45:11 +0000 (0:00:00.310) 0:00:55.157 ********** 2026-04-05 00:45:11.763642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-05 00:45:11.763652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-05 00:45:11.763661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-05 00:45:11.763672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-05 00:45:11.763681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-05 00:45:11.763690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-05 00:45:11.763699 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-05 00:45:11.763708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-05 00:45:11.763717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-05 00:45:11.763734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-05 00:45:11.763744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-05 00:45:11.763760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-05 00:45:20.292078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-05 00:45:20.292164 | orchestrator | 2026-04-05 00:45:20.292173 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292179 | orchestrator | Sunday 05 April 2026 00:45:11 +0000 (0:00:00.440) 0:00:55.598 ********** 2026-04-05 00:45:20.292185 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292191 | orchestrator | 2026-04-05 00:45:20.292196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292202 | orchestrator | Sunday 05 April 2026 00:45:12 +0000 (0:00:00.168) 0:00:55.766 ********** 2026-04-05 00:45:20.292207 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292212 | orchestrator | 2026-04-05 00:45:20.292217 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292222 | orchestrator | Sunday 05 April 2026 00:45:12 +0000 (0:00:00.187) 0:00:55.954 ********** 2026-04-05 00:45:20.292227 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292233 | orchestrator | 2026-04-05 00:45:20.292238 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292254 | orchestrator | Sunday 05 April 2026 00:45:12 +0000 (0:00:00.497) 0:00:56.452 ********** 2026-04-05 00:45:20.292260 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292265 | orchestrator | 2026-04-05 00:45:20.292270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292275 | orchestrator | Sunday 05 April 2026 00:45:12 +0000 (0:00:00.190) 0:00:56.643 ********** 2026-04-05 00:45:20.292280 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292285 | orchestrator | 2026-04-05 00:45:20.292290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292295 | orchestrator | Sunday 05 April 2026 00:45:13 +0000 (0:00:00.179) 0:00:56.823 ********** 2026-04-05 00:45:20.292300 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292305 | orchestrator | 2026-04-05 00:45:20.292310 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292316 | orchestrator | Sunday 05 April 2026 00:45:13 +0000 (0:00:00.206) 0:00:57.029 ********** 2026-04-05 00:45:20.292321 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292326 | orchestrator | 2026-04-05 00:45:20.292331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292336 | orchestrator | Sunday 05 April 2026 00:45:13 +0000 (0:00:00.203) 0:00:57.232 ********** 2026-04-05 00:45:20.292341 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292346 | orchestrator | 2026-04-05 00:45:20.292351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292359 | orchestrator | Sunday 05 April 2026 00:45:13 +0000 (0:00:00.211) 0:00:57.443 ********** 
2026-04-05 00:45:20.292368 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-05 00:45:20.292377 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-05 00:45:20.292386 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-05 00:45:20.292394 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-05 00:45:20.292401 | orchestrator | 2026-04-05 00:45:20.292407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292412 | orchestrator | Sunday 05 April 2026 00:45:14 +0000 (0:00:00.664) 0:00:58.108 ********** 2026-04-05 00:45:20.292417 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292422 | orchestrator | 2026-04-05 00:45:20.292427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292453 | orchestrator | Sunday 05 April 2026 00:45:14 +0000 (0:00:00.182) 0:00:58.291 ********** 2026-04-05 00:45:20.292463 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292472 | orchestrator | 2026-04-05 00:45:20.292511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292522 | orchestrator | Sunday 05 April 2026 00:45:14 +0000 (0:00:00.201) 0:00:58.493 ********** 2026-04-05 00:45:20.292530 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292537 | orchestrator | 2026-04-05 00:45:20.292547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 00:45:20.292554 | orchestrator | Sunday 05 April 2026 00:45:14 +0000 (0:00:00.202) 0:00:58.695 ********** 2026-04-05 00:45:20.292562 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292570 | orchestrator | 2026-04-05 00:45:20.292578 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-05 00:45:20.292587 | orchestrator | Sunday 05 April 2026 00:45:15 +0000 
(0:00:00.187) 0:00:58.883 ********** 2026-04-05 00:45:20.292595 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292604 | orchestrator | 2026-04-05 00:45:20.292621 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-05 00:45:20.292629 | orchestrator | Sunday 05 April 2026 00:45:15 +0000 (0:00:00.361) 0:00:59.245 ********** 2026-04-05 00:45:20.292637 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}}) 2026-04-05 00:45:20.292646 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1dbeab33-88c6-544f-8f85-2175dc04d523'}}) 2026-04-05 00:45:20.292654 | orchestrator | 2026-04-05 00:45:20.292662 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-05 00:45:20.292672 | orchestrator | Sunday 05 April 2026 00:45:15 +0000 (0:00:00.197) 0:00:59.442 ********** 2026-04-05 00:45:20.292681 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}) 2026-04-05 00:45:20.292692 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'}) 2026-04-05 00:45:20.292701 | orchestrator | 2026-04-05 00:45:20.292711 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-05 00:45:20.292737 | orchestrator | Sunday 05 April 2026 00:45:17 +0000 (0:00:01.871) 0:01:01.314 ********** 2026-04-05 00:45:20.292747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:20.292757 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:20.292767 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292777 | orchestrator | 2026-04-05 00:45:20.292786 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-05 00:45:20.292796 | orchestrator | Sunday 05 April 2026 00:45:17 +0000 (0:00:00.161) 0:01:01.475 ********** 2026-04-05 00:45:20.292805 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}) 2026-04-05 00:45:20.292816 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'}) 2026-04-05 00:45:20.292826 | orchestrator | 2026-04-05 00:45:20.292835 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-05 00:45:20.292845 | orchestrator | Sunday 05 April 2026 00:45:19 +0000 (0:00:01.303) 0:01:02.779 ********** 2026-04-05 00:45:20.292855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:20.292873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:20.292883 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292892 | orchestrator | 2026-04-05 00:45:20.292902 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-05 00:45:20.292911 | orchestrator | Sunday 05 April 2026 00:45:19 +0000 (0:00:00.160) 0:01:02.939 ********** 2026-04-05 00:45:20.292920 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292928 | 
orchestrator | 2026-04-05 00:45:20.292936 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-05 00:45:20.292944 | orchestrator | Sunday 05 April 2026 00:45:19 +0000 (0:00:00.148) 0:01:03.087 ********** 2026-04-05 00:45:20.292952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:20.292961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:20.292968 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292973 | orchestrator | 2026-04-05 00:45:20.292978 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-05 00:45:20.292983 | orchestrator | Sunday 05 April 2026 00:45:19 +0000 (0:00:00.160) 0:01:03.248 ********** 2026-04-05 00:45:20.292988 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.292993 | orchestrator | 2026-04-05 00:45:20.292998 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-05 00:45:20.293012 | orchestrator | Sunday 05 April 2026 00:45:19 +0000 (0:00:00.137) 0:01:03.385 ********** 2026-04-05 00:45:20.293017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:20.293022 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:20.293027 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.293033 | orchestrator | 2026-04-05 00:45:20.293038 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-05 00:45:20.293043 | orchestrator | Sunday 05 April 2026 00:45:19 +0000 (0:00:00.162) 0:01:03.548 ********** 2026-04-05 00:45:20.293048 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.293053 | orchestrator | 2026-04-05 00:45:20.293058 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-05 00:45:20.293067 | orchestrator | Sunday 05 April 2026 00:45:19 +0000 (0:00:00.140) 0:01:03.688 ********** 2026-04-05 00:45:20.293076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:20.293085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:20.293090 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:20.293095 | orchestrator | 2026-04-05 00:45:20.293100 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-05 00:45:20.293105 | orchestrator | Sunday 05 April 2026 00:45:20 +0000 (0:00:00.149) 0:01:03.838 ********** 2026-04-05 00:45:20.293110 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:20.293116 | orchestrator | 2026-04-05 00:45:20.293122 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-05 00:45:20.293131 | orchestrator | Sunday 05 April 2026 00:45:20 +0000 (0:00:00.140) 0:01:03.978 ********** 2026-04-05 00:45:20.293147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:26.842574 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:26.842706 | 
orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.842723 | orchestrator | 2026-04-05 00:45:26.842736 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-05 00:45:26.842750 | orchestrator | Sunday 05 April 2026 00:45:20 +0000 (0:00:00.381) 0:01:04.360 ********** 2026-04-05 00:45:26.842761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:26.842773 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:26.842784 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.842795 | orchestrator | 2026-04-05 00:45:26.842823 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-05 00:45:26.842834 | orchestrator | Sunday 05 April 2026 00:45:20 +0000 (0:00:00.174) 0:01:04.535 ********** 2026-04-05 00:45:26.842845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:26.842856 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:26.842867 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.842878 | orchestrator | 2026-04-05 00:45:26.842889 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-05 00:45:26.842900 | orchestrator | Sunday 05 April 2026 00:45:20 +0000 (0:00:00.144) 0:01:04.680 ********** 2026-04-05 00:45:26.842911 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.842921 | orchestrator | 2026-04-05 00:45:26.842932 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-05 00:45:26.842943 | orchestrator | Sunday 05 April 2026 00:45:21 +0000 (0:00:00.125) 0:01:04.805 ********** 2026-04-05 00:45:26.842954 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.842965 | orchestrator | 2026-04-05 00:45:26.842976 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-05 00:45:26.842987 | orchestrator | Sunday 05 April 2026 00:45:21 +0000 (0:00:00.147) 0:01:04.953 ********** 2026-04-05 00:45:26.842998 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843009 | orchestrator | 2026-04-05 00:45:26.843021 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-05 00:45:26.843031 | orchestrator | Sunday 05 April 2026 00:45:21 +0000 (0:00:00.153) 0:01:05.107 ********** 2026-04-05 00:45:26.843042 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:45:26.843054 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-05 00:45:26.843066 | orchestrator | } 2026-04-05 00:45:26.843079 | orchestrator | 2026-04-05 00:45:26.843092 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-05 00:45:26.843105 | orchestrator | Sunday 05 April 2026 00:45:21 +0000 (0:00:00.144) 0:01:05.251 ********** 2026-04-05 00:45:26.843117 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:45:26.843131 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-05 00:45:26.843144 | orchestrator | } 2026-04-05 00:45:26.843156 | orchestrator | 2026-04-05 00:45:26.843169 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-05 00:45:26.843182 | orchestrator | Sunday 05 April 2026 00:45:21 +0000 (0:00:00.136) 0:01:05.387 ********** 2026-04-05 00:45:26.843195 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:45:26.843208 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-05 00:45:26.843219 | orchestrator | } 2026-04-05 00:45:26.843230 | orchestrator | 2026-04-05 00:45:26.843241 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-05 00:45:26.843252 | orchestrator | Sunday 05 April 2026 00:45:21 +0000 (0:00:00.150) 0:01:05.538 ********** 2026-04-05 00:45:26.843283 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:26.843294 | orchestrator | 2026-04-05 00:45:26.843305 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-05 00:45:26.843317 | orchestrator | Sunday 05 April 2026 00:45:22 +0000 (0:00:00.506) 0:01:06.045 ********** 2026-04-05 00:45:26.843327 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:26.843338 | orchestrator | 2026-04-05 00:45:26.843349 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-05 00:45:26.843360 | orchestrator | Sunday 05 April 2026 00:45:22 +0000 (0:00:00.524) 0:01:06.569 ********** 2026-04-05 00:45:26.843371 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:26.843381 | orchestrator | 2026-04-05 00:45:26.843392 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-05 00:45:26.843403 | orchestrator | Sunday 05 April 2026 00:45:23 +0000 (0:00:00.556) 0:01:07.126 ********** 2026-04-05 00:45:26.843414 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:26.843425 | orchestrator | 2026-04-05 00:45:26.843436 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-05 00:45:26.843447 | orchestrator | Sunday 05 April 2026 00:45:23 +0000 (0:00:00.381) 0:01:07.507 ********** 2026-04-05 00:45:26.843458 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843468 | orchestrator | 2026-04-05 00:45:26.843479 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-05 00:45:26.843563 | orchestrator | Sunday 05 April 2026 00:45:23 +0000 (0:00:00.102) 0:01:07.610 ********** 2026-04-05 00:45:26.843583 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843602 | orchestrator | 2026-04-05 00:45:26.843616 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-05 00:45:26.843627 | orchestrator | Sunday 05 April 2026 00:45:23 +0000 (0:00:00.104) 0:01:07.715 ********** 2026-04-05 00:45:26.843638 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:45:26.843649 | orchestrator |  "vgs_report": { 2026-04-05 00:45:26.843661 | orchestrator |  "vg": [] 2026-04-05 00:45:26.843690 | orchestrator |  } 2026-04-05 00:45:26.843702 | orchestrator | } 2026-04-05 00:45:26.843713 | orchestrator | 2026-04-05 00:45:26.843724 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-05 00:45:26.843735 | orchestrator | Sunday 05 April 2026 00:45:24 +0000 (0:00:00.152) 0:01:07.868 ********** 2026-04-05 00:45:26.843746 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843757 | orchestrator | 2026-04-05 00:45:26.843768 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-05 00:45:26.843779 | orchestrator | Sunday 05 April 2026 00:45:24 +0000 (0:00:00.137) 0:01:08.005 ********** 2026-04-05 00:45:26.843790 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843800 | orchestrator | 2026-04-05 00:45:26.843811 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-05 00:45:26.843822 | orchestrator | Sunday 05 April 2026 00:45:24 +0000 (0:00:00.151) 0:01:08.156 ********** 2026-04-05 00:45:26.843833 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843843 | orchestrator | 2026-04-05 00:45:26.843854 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-05 00:45:26.843872 | orchestrator | Sunday 05 April 2026 00:45:24 +0000 (0:00:00.132) 0:01:08.289 ********** 2026-04-05 00:45:26.843883 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843894 | orchestrator | 2026-04-05 00:45:26.843905 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-05 00:45:26.843916 | orchestrator | Sunday 05 April 2026 00:45:24 +0000 (0:00:00.129) 0:01:08.418 ********** 2026-04-05 00:45:26.843926 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843937 | orchestrator | 2026-04-05 00:45:26.843948 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-05 00:45:26.843959 | orchestrator | Sunday 05 April 2026 00:45:24 +0000 (0:00:00.139) 0:01:08.557 ********** 2026-04-05 00:45:26.843969 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.843989 | orchestrator | 2026-04-05 00:45:26.844001 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-05 00:45:26.844012 | orchestrator | Sunday 05 April 2026 00:45:24 +0000 (0:00:00.150) 0:01:08.708 ********** 2026-04-05 00:45:26.844022 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844033 | orchestrator | 2026-04-05 00:45:26.844044 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-05 00:45:26.844055 | orchestrator | Sunday 05 April 2026 00:45:25 +0000 (0:00:00.143) 0:01:08.851 ********** 2026-04-05 00:45:26.844065 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844076 | orchestrator | 2026-04-05 00:45:26.844087 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-05 00:45:26.844098 | orchestrator | Sunday 05 April 2026 00:45:25 +0000 (0:00:00.156) 0:01:09.008 ********** 2026-04-05 00:45:26.844109 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 00:45:26.844120 | orchestrator | 2026-04-05 00:45:26.844131 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-05 00:45:26.844142 | orchestrator | Sunday 05 April 2026 00:45:25 +0000 (0:00:00.366) 0:01:09.374 ********** 2026-04-05 00:45:26.844153 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844164 | orchestrator | 2026-04-05 00:45:26.844175 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-05 00:45:26.844186 | orchestrator | Sunday 05 April 2026 00:45:25 +0000 (0:00:00.190) 0:01:09.564 ********** 2026-04-05 00:45:26.844196 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844207 | orchestrator | 2026-04-05 00:45:26.844218 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-05 00:45:26.844229 | orchestrator | Sunday 05 April 2026 00:45:25 +0000 (0:00:00.148) 0:01:09.713 ********** 2026-04-05 00:45:26.844240 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844251 | orchestrator | 2026-04-05 00:45:26.844262 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-05 00:45:26.844273 | orchestrator | Sunday 05 April 2026 00:45:26 +0000 (0:00:00.178) 0:01:09.891 ********** 2026-04-05 00:45:26.844284 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844295 | orchestrator | 2026-04-05 00:45:26.844306 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-05 00:45:26.844317 | orchestrator | Sunday 05 April 2026 00:45:26 +0000 (0:00:00.140) 0:01:10.032 ********** 2026-04-05 00:45:26.844327 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844338 | orchestrator | 2026-04-05 00:45:26.844349 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-05 00:45:26.844360 | 
orchestrator | Sunday 05 April 2026 00:45:26 +0000 (0:00:00.145) 0:01:10.178 ********** 2026-04-05 00:45:26.844371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:26.844383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:26.844394 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844405 | orchestrator | 2026-04-05 00:45:26.844416 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-05 00:45:26.844426 | orchestrator | Sunday 05 April 2026 00:45:26 +0000 (0:00:00.185) 0:01:10.363 ********** 2026-04-05 00:45:26.844437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:26.844448 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:26.844459 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:26.844470 | orchestrator | 2026-04-05 00:45:26.844481 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-05 00:45:26.844519 | orchestrator | Sunday 05 April 2026 00:45:26 +0000 (0:00:00.169) 0:01:10.533 ********** 2026-04-05 00:45:26.844537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.888278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 
00:45:29.888385 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.888403 | orchestrator | 2026-04-05 00:45:29.888420 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-05 00:45:29.888440 | orchestrator | Sunday 05 April 2026 00:45:26 +0000 (0:00:00.152) 0:01:10.686 ********** 2026-04-05 00:45:29.888461 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.888563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.888581 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.888592 | orchestrator | 2026-04-05 00:45:29.888603 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-05 00:45:29.888614 | orchestrator | Sunday 05 April 2026 00:45:27 +0000 (0:00:00.133) 0:01:10.819 ********** 2026-04-05 00:45:29.888625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.888635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.888646 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.888657 | orchestrator | 2026-04-05 00:45:29.888669 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-05 00:45:29.888679 | orchestrator | Sunday 05 April 2026 00:45:27 +0000 (0:00:00.149) 0:01:10.968 ********** 2026-04-05 00:45:29.888690 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 
'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.888701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.888711 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.888722 | orchestrator | 2026-04-05 00:45:29.888733 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-05 00:45:29.888743 | orchestrator | Sunday 05 April 2026 00:45:27 +0000 (0:00:00.152) 0:01:11.121 ********** 2026-04-05 00:45:29.888754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.888764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.888778 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.888789 | orchestrator | 2026-04-05 00:45:29.888803 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-05 00:45:29.888823 | orchestrator | Sunday 05 April 2026 00:45:27 +0000 (0:00:00.386) 0:01:11.507 ********** 2026-04-05 00:45:29.888842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.888863 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.888882 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.888942 | orchestrator | 2026-04-05 00:45:29.888964 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-05 
00:45:29.888985 | orchestrator | Sunday 05 April 2026 00:45:27 +0000 (0:00:00.152) 0:01:11.660 ********** 2026-04-05 00:45:29.889004 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:29.889022 | orchestrator | 2026-04-05 00:45:29.889036 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-05 00:45:29.889048 | orchestrator | Sunday 05 April 2026 00:45:28 +0000 (0:00:00.510) 0:01:12.170 ********** 2026-04-05 00:45:29.889059 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:29.889070 | orchestrator | 2026-04-05 00:45:29.889080 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-05 00:45:29.889091 | orchestrator | Sunday 05 April 2026 00:45:28 +0000 (0:00:00.505) 0:01:12.676 ********** 2026-04-05 00:45:29.889101 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:29.889112 | orchestrator | 2026-04-05 00:45:29.889123 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-05 00:45:29.889134 | orchestrator | Sunday 05 April 2026 00:45:29 +0000 (0:00:00.152) 0:01:12.828 ********** 2026-04-05 00:45:29.889144 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'vg_name': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'}) 2026-04-05 00:45:29.889156 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'vg_name': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'}) 2026-04-05 00:45:29.889167 | orchestrator | 2026-04-05 00:45:29.889177 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-05 00:45:29.889188 | orchestrator | Sunday 05 April 2026 00:45:29 +0000 (0:00:00.189) 0:01:13.017 ********** 2026-04-05 00:45:29.889218 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 
'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.889229 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.889240 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.889251 | orchestrator | 2026-04-05 00:45:29.889261 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-05 00:45:29.889272 | orchestrator | Sunday 05 April 2026 00:45:29 +0000 (0:00:00.144) 0:01:13.162 ********** 2026-04-05 00:45:29.889283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.889293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.889304 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.889315 | orchestrator | 2026-04-05 00:45:29.889325 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-05 00:45:29.889335 | orchestrator | Sunday 05 April 2026 00:45:29 +0000 (0:00:00.170) 0:01:13.333 ********** 2026-04-05 00:45:29.889346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})  2026-04-05 00:45:29.889357 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})  2026-04-05 00:45:29.889367 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:29.889378 | orchestrator | 2026-04-05 00:45:29.889388 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 
00:45:29.889399 | orchestrator | Sunday 05 April 2026 00:45:29 +0000 (0:00:00.162) 0:01:13.495 ********** 2026-04-05 00:45:29.889409 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 00:45:29.889420 | orchestrator |  "lvm_report": { 2026-04-05 00:45:29.889432 | orchestrator |  "lv": [ 2026-04-05 00:45:29.889451 | orchestrator |  { 2026-04-05 00:45:29.889462 | orchestrator |  "lv_name": "osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a", 2026-04-05 00:45:29.889474 | orchestrator |  "vg_name": "ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a" 2026-04-05 00:45:29.889519 | orchestrator |  }, 2026-04-05 00:45:29.889539 | orchestrator |  { 2026-04-05 00:45:29.889557 | orchestrator |  "lv_name": "osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523", 2026-04-05 00:45:29.889576 | orchestrator |  "vg_name": "ceph-1dbeab33-88c6-544f-8f85-2175dc04d523" 2026-04-05 00:45:29.889587 | orchestrator |  } 2026-04-05 00:45:29.889605 | orchestrator |  ], 2026-04-05 00:45:29.889622 | orchestrator |  "pv": [ 2026-04-05 00:45:29.889650 | orchestrator |  { 2026-04-05 00:45:29.889669 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 00:45:29.889688 | orchestrator |  "vg_name": "ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a" 2026-04-05 00:45:29.889705 | orchestrator |  }, 2026-04-05 00:45:29.889723 | orchestrator |  { 2026-04-05 00:45:29.889740 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 00:45:29.889758 | orchestrator |  "vg_name": "ceph-1dbeab33-88c6-544f-8f85-2175dc04d523" 2026-04-05 00:45:29.889776 | orchestrator |  } 2026-04-05 00:45:29.889795 | orchestrator |  ] 2026-04-05 00:45:29.889812 | orchestrator |  } 2026-04-05 00:45:29.889829 | orchestrator | } 2026-04-05 00:45:29.889847 | orchestrator | 2026-04-05 00:45:29.889864 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:45:29.889883 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:45:29.889902 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:45:29.889921 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-05 00:45:29.889938 | orchestrator | 2026-04-05 00:45:29.889956 | orchestrator | 2026-04-05 00:45:29.889973 | orchestrator | 2026-04-05 00:45:29.890006 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:45:29.890109 | orchestrator | Sunday 05 April 2026 00:45:29 +0000 (0:00:00.139) 0:01:13.634 ********** 2026-04-05 00:45:29.890130 | orchestrator | =============================================================================== 2026-04-05 00:45:29.890149 | orchestrator | Create block VGs -------------------------------------------------------- 5.51s 2026-04-05 00:45:29.890168 | orchestrator | Create block LVs -------------------------------------------------------- 3.91s 2026-04-05 00:45:29.890185 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.01s 2026-04-05 00:45:29.890204 | orchestrator | Add known partitions to the list of available block devices ------------- 1.65s 2026-04-05 00:45:29.890220 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-04-05 00:45:29.890237 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2026-04-05 00:45:29.890254 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2026-04-05 00:45:29.890271 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2026-04-05 00:45:29.890307 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s 2026-04-05 00:45:30.344651 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2026-04-05 
00:45:30.344754 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2026-04-05 00:45:30.344769 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s 2026-04-05 00:45:30.344782 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-04-05 00:45:30.344793 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-04-05 00:45:30.344832 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-04-05 00:45:30.344844 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.76s 2026-04-05 00:45:30.344871 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2026-04-05 00:45:30.344883 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-04-05 00:45:30.344893 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.72s 2026-04-05 00:45:30.344904 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.72s 2026-04-05 00:45:41.936415 | orchestrator | 2026-04-05 00:45:41 | INFO  | Prepare task for execution of facts. 2026-04-05 00:45:42.016947 | orchestrator | 2026-04-05 00:45:42 | INFO  | Task ddf4065d-cd21-4c4d-98c7-7b659941bccc (facts) was prepared for execution. 2026-04-05 00:45:42.017079 | orchestrator | 2026-04-05 00:45:42 | INFO  | It takes a moment until task ddf4065d-cd21-4c4d-98c7-7b659941bccc (facts) has been started and output is visible here. 
2026-04-05 00:45:53.527577 | orchestrator | 2026-04-05 00:45:53.527691 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 00:45:53.527709 | orchestrator | 2026-04-05 00:45:53.527721 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 00:45:53.527733 | orchestrator | Sunday 05 April 2026 00:45:45 +0000 (0:00:00.369) 0:00:00.369 ********** 2026-04-05 00:45:53.527744 | orchestrator | ok: [testbed-manager] 2026-04-05 00:45:53.527756 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:45:53.527768 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:45:53.527779 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:45:53.527789 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:45:53.527800 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:45:53.527810 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:53.527821 | orchestrator | 2026-04-05 00:45:53.527832 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 00:45:53.527842 | orchestrator | Sunday 05 April 2026 00:45:46 +0000 (0:00:01.348) 0:00:01.718 ********** 2026-04-05 00:45:53.527853 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:45:53.527865 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:45:53.527875 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:45:53.527886 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:45:53.527896 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:45:53.527907 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:45:53.527918 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:53.527928 | orchestrator | 2026-04-05 00:45:53.527939 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 00:45:53.527950 | orchestrator | 2026-04-05 00:45:53.527961 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-05 00:45:53.527971 | orchestrator | Sunday 05 April 2026 00:45:47 +0000 (0:00:01.146) 0:00:02.864 ********** 2026-04-05 00:45:53.527982 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:45:53.527993 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:45:53.528004 | orchestrator | ok: [testbed-manager] 2026-04-05 00:45:53.528014 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:45:53.528025 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:45:53.528036 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:45:53.528046 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:45:53.528057 | orchestrator | 2026-04-05 00:45:53.528068 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 00:45:53.528078 | orchestrator | 2026-04-05 00:45:53.528089 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 00:45:53.528100 | orchestrator | Sunday 05 April 2026 00:45:52 +0000 (0:00:04.615) 0:00:07.480 ********** 2026-04-05 00:45:53.528111 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:45:53.528122 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:45:53.528161 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:45:53.528172 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:45:53.528183 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:45:53.528193 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:45:53.528203 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:45:53.528214 | orchestrator | 2026-04-05 00:45:53.528225 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:45:53.528236 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:45:53.528247 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 00:45:53.528258 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:45:53.528268 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:45:53.528279 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:45:53.528317 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:45:53.528329 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 00:45:53.528340 | orchestrator | 2026-04-05 00:45:53.528351 | orchestrator | 2026-04-05 00:45:53.528362 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:45:53.528372 | orchestrator | Sunday 05 April 2026 00:45:53 +0000 (0:00:00.533) 0:00:08.013 ********** 2026-04-05 00:45:53.528383 | orchestrator | =============================================================================== 2026-04-05 00:45:53.528394 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.62s 2026-04-05 00:45:53.528404 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.35s 2026-04-05 00:45:53.528431 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s 2026-04-05 00:45:53.528442 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-04-05 00:46:05.082909 | orchestrator | 2026-04-05 00:46:05 | INFO  | Prepare task for execution of frr. 2026-04-05 00:46:05.163865 | orchestrator | 2026-04-05 00:46:05 | INFO  | Task c3b4a853-cdce-4198-8adf-58c62550248f (frr) was prepared for execution. 
2026-04-05 00:46:05.163957 | orchestrator | 2026-04-05 00:46:05 | INFO  | It takes a moment until task c3b4a853-cdce-4198-8adf-58c62550248f (frr) has been started and output is visible here. 2026-04-05 00:46:31.309153 | orchestrator | 2026-04-05 00:46:31.309251 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-05 00:46:31.309265 | orchestrator | 2026-04-05 00:46:31.309274 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-05 00:46:31.309284 | orchestrator | Sunday 05 April 2026 00:46:08 +0000 (0:00:00.325) 0:00:00.325 ********** 2026-04-05 00:46:31.309293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 00:46:31.309303 | orchestrator | 2026-04-05 00:46:31.309312 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-05 00:46:31.309321 | orchestrator | Sunday 05 April 2026 00:46:08 +0000 (0:00:00.236) 0:00:00.562 ********** 2026-04-05 00:46:31.309329 | orchestrator | changed: [testbed-manager] 2026-04-05 00:46:31.309339 | orchestrator | 2026-04-05 00:46:31.309348 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-05 00:46:31.309378 | orchestrator | Sunday 05 April 2026 00:46:10 +0000 (0:00:01.546) 0:00:02.109 ********** 2026-04-05 00:46:31.309387 | orchestrator | changed: [testbed-manager] 2026-04-05 00:46:31.309396 | orchestrator | 2026-04-05 00:46:31.309404 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-05 00:46:31.309413 | orchestrator | Sunday 05 April 2026 00:46:20 +0000 (0:00:10.247) 0:00:12.356 ********** 2026-04-05 00:46:31.309421 | orchestrator | ok: [testbed-manager] 2026-04-05 00:46:31.309431 | orchestrator | 2026-04-05 00:46:31.309440 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-05 00:46:31.309448 | orchestrator | Sunday 05 April 2026 00:46:21 +0000 (0:00:01.042) 0:00:13.399 ********** 2026-04-05 00:46:31.309457 | orchestrator | changed: [testbed-manager] 2026-04-05 00:46:31.309465 | orchestrator | 2026-04-05 00:46:31.309474 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-05 00:46:31.309483 | orchestrator | Sunday 05 April 2026 00:46:22 +0000 (0:00:01.010) 0:00:14.410 ********** 2026-04-05 00:46:31.309491 | orchestrator | ok: [testbed-manager] 2026-04-05 00:46:31.309533 | orchestrator | 2026-04-05 00:46:31.309542 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-05 00:46:31.309550 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:01.264) 0:00:15.674 ********** 2026-04-05 00:46:31.309559 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:46:31.309567 | orchestrator | 2026-04-05 00:46:31.309576 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-05 00:46:31.309584 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.145) 0:00:15.819 ********** 2026-04-05 00:46:31.309593 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:46:31.309601 | orchestrator | 2026-04-05 00:46:31.309610 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-05 00:46:31.309618 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.301) 0:00:16.121 ********** 2026-04-05 00:46:31.309627 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:46:31.309635 | orchestrator | 2026-04-05 00:46:31.309644 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-05 00:46:31.309653 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.167) 0:00:16.289 ********** 2026-04-05 
00:46:31.309661 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:46:31.309670 | orchestrator | 2026-04-05 00:46:31.309679 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-05 00:46:31.309687 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.153) 0:00:16.442 ********** 2026-04-05 00:46:31.309696 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:46:31.309706 | orchestrator | 2026-04-05 00:46:31.309716 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-05 00:46:31.309726 | orchestrator | Sunday 05 April 2026 00:46:24 +0000 (0:00:00.154) 0:00:16.597 ********** 2026-04-05 00:46:31.309736 | orchestrator | changed: [testbed-manager] 2026-04-05 00:46:31.309746 | orchestrator | 2026-04-05 00:46:31.309756 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-05 00:46:31.309766 | orchestrator | Sunday 05 April 2026 00:46:25 +0000 (0:00:01.019) 0:00:17.617 ********** 2026-04-05 00:46:31.309775 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-05 00:46:31.309785 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-05 00:46:31.309797 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-05 00:46:31.309807 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-05 00:46:31.309816 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-05 00:46:31.309827 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-05 00:46:31.309857 | orchestrator | 2026-04-05 00:46:31.309928 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-05 00:46:31.309952 | orchestrator | Sunday 05 April 2026 00:46:28 +0000 (0:00:02.305) 0:00:19.922 ********** 2026-04-05 00:46:31.309963 | orchestrator | ok: [testbed-manager] 2026-04-05 00:46:31.309974 | orchestrator | 2026-04-05 00:46:31.309983 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-05 00:46:31.309993 | orchestrator | Sunday 05 April 2026 00:46:29 +0000 (0:00:01.219) 0:00:21.142 ********** 2026-04-05 00:46:31.310004 | orchestrator | changed: [testbed-manager] 2026-04-05 00:46:31.310073 | orchestrator | 2026-04-05 00:46:31.310085 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:46:31.310096 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 00:46:31.310107 | orchestrator | 2026-04-05 00:46:31.310117 | orchestrator | 2026-04-05 00:46:31.310142 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:46:31.310151 | orchestrator | Sunday 05 April 2026 00:46:30 +0000 (0:00:01.388) 0:00:22.530 ********** 2026-04-05 00:46:31.310159 | orchestrator | =============================================================================== 2026-04-05 00:46:31.310168 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.25s 2026-04-05 00:46:31.310176 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.31s 2026-04-05 00:46:31.310185 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.55s 2026-04-05 00:46:31.310193 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.39s 2026-04-05 00:46:31.310201 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.26s 
2026-04-05 00:46:31.310210 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.22s 2026-04-05 00:46:31.310218 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.04s 2026-04-05 00:46:31.310226 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.02s 2026-04-05 00:46:31.310235 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.01s 2026-04-05 00:46:31.310243 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.30s 2026-04-05 00:46:31.310251 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2026-04-05 00:46:31.310260 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.17s 2026-04-05 00:46:31.310268 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-04-05 00:46:31.310277 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-04-05 00:46:31.310285 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.14s 2026-04-05 00:46:31.493963 | orchestrator | 2026-04-05 00:46:31.495649 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Apr 5 00:46:31 UTC 2026 2026-04-05 00:46:31.495688 | orchestrator | 2026-04-05 00:46:32.624747 | orchestrator | 2026-04-05 00:46:32 | INFO  | Collection nutshell is prepared for execution 2026-04-05 00:46:32.744251 | orchestrator | 2026-04-05 00:46:32 | INFO  | A [0] - dotfiles 2026-04-05 00:46:42.820394 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [0] - homer 2026-04-05 00:46:42.820610 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [0] - netdata 2026-04-05 00:46:42.820632 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [0] - openstackclient 2026-04-05 00:46:42.820644 | orchestrator | 2026-04-05 00:46:42 
| INFO  | A [0] - phpmyadmin 2026-04-05 00:46:42.820674 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [0] - common 2026-04-05 00:46:42.825699 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- loadbalancer 2026-04-05 00:46:42.825920 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [2] --- opensearch 2026-04-05 00:46:42.826006 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [2] --- mariadb-ng 2026-04-05 00:46:42.826306 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [3] ---- horizon 2026-04-05 00:46:42.826338 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [3] ---- keystone 2026-04-05 00:46:42.826356 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- neutron 2026-04-05 00:46:42.826846 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [5] ------ wait-for-nova 2026-04-05 00:46:42.826899 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [6] ------- octavia 2026-04-05 00:46:42.828818 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- barbican 2026-04-05 00:46:42.828863 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- designate 2026-04-05 00:46:42.828874 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- ironic 2026-04-05 00:46:42.828884 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- placement 2026-04-05 00:46:42.829097 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- magnum 2026-04-05 00:46:42.831311 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- openvswitch 2026-04-05 00:46:42.831351 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [2] --- ovn 2026-04-05 00:46:42.831669 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- memcached 2026-04-05 00:46:42.831967 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- redis 2026-04-05 00:46:42.832004 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- rabbitmq-ng 2026-04-05 00:46:42.832552 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [0] - kubernetes 2026-04-05 00:46:42.835960 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- 
kubeconfig 2026-04-05 00:46:42.836063 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- copy-kubeconfig 2026-04-05 00:46:42.836075 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [0] - ceph 2026-04-05 00:46:42.838934 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [1] -- ceph-pools 2026-04-05 00:46:42.839076 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [2] --- copy-ceph-keys 2026-04-05 00:46:42.839092 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [3] ---- cephclient 2026-04-05 00:46:42.839104 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-05 00:46:42.839126 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- wait-for-keystone 2026-04-05 00:46:42.839606 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-05 00:46:42.839644 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [5] ------ glance 2026-04-05 00:46:42.839664 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [5] ------ cinder 2026-04-05 00:46:42.839681 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [5] ------ nova 2026-04-05 00:46:42.840158 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [4] ----- prometheus 2026-04-05 00:46:42.840188 | orchestrator | 2026-04-05 00:46:42 | INFO  | A [5] ------ grafana 2026-04-05 00:46:43.050991 | orchestrator | 2026-04-05 00:46:43 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-05 00:46:43.051086 | orchestrator | 2026-04-05 00:46:43 | INFO  | Tasks are running in the background 2026-04-05 00:46:45.016011 | orchestrator | 2026-04-05 00:46:45 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-05 00:46:47.229661 | orchestrator | 2026-04-05 00:46:47 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:46:47.230152 | orchestrator | 2026-04-05 00:46:47 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:46:47.231095 | orchestrator | 2026-04-05 00:46:47 | INFO 
 | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:46:47.231562 | orchestrator | 2026-04-05 00:46:47 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:46:47.232617 | orchestrator | 2026-04-05 00:46:47 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state STARTED
2026-04-05 00:46:47.236138 | orchestrator | 2026-04-05 00:46:47 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:46:47.239722 | orchestrator | 2026-04-05 00:46:47 | INFO  | Task 2166eeee-bc88-4381-b32b-127d210ba468 is in state STARTED
2026-04-05 00:46:47.239752 | orchestrator | 2026-04-05 00:46:47 | INFO  | Wait 1 second(s) until the next check
[identical poll cycles at 00:46:50, 00:46:53, 00:46:56, 00:46:59, 00:47:02, 00:47:05 and 00:47:08 elided; all seven task states unchanged]
2026-04-05 00:47:11.937997 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:47:11.938189 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED
2026-04-05 00:47:11.938209 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:47:11.938220 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:47:11.938231 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state STARTED
2026-04-05 00:47:11.939362 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:47:11.945752 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:47:11.951786 | orchestrator | 2026-04-05 00:47:11 | INFO  | Task 2166eeee-bc88-4381-b32b-127d210ba468 is in state SUCCESS
2026-04-05 00:47:11.952625 | orchestrator |
2026-04-05 00:47:11.952671 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-05 00:47:11.952690 | orchestrator |
2026-04-05 00:47:11.952707 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-04-05 00:47:11.952724 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:01.112) 0:00:01.112 ********** 2026-04-05 00:47:11.952740 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:47:11.952758 | orchestrator | changed: [testbed-manager] 2026-04-05 00:47:11.952775 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:47:11.952791 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:47:11.952805 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:47:11.952815 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:47:11.952825 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:47:11.952834 | orchestrator | 2026-04-05 00:47:11.952844 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-04-05 00:47:11.952853 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:04.958) 0:00:06.071 ********** 2026-04-05 00:47:11.952863 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-05 00:47:11.952873 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-05 00:47:11.952883 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-05 00:47:11.952892 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-05 00:47:11.952902 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-05 00:47:11.952911 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-05 00:47:11.952920 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-05 00:47:11.952930 | orchestrator | 2026-04-05 00:47:11.952939 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-04-05 00:47:11.952949 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:03.194) 0:00:09.265 ********** 2026-04-05 00:47:11.952964 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:46:59.670771', 'end': '2026-04-05 00:46:59.679746', 'delta': '0:00:00.008975', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-05 00:47:11.953019 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:46:59.788789', 'end': '2026-04-05 00:46:59.798354', 'delta': '0:00:00.009565', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-05 00:47:11.953032 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:46:59.947560', 'end': '2026-04-05 00:46:59.955290', 'delta': '0:00:00.007730', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-05 00:47:11.953066 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:47:00.190688', 'end': '2026-04-05 00:47:00.199350', 'delta': '0:00:00.008662', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-05 00:47:11.953077 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:47:00.824641', 'end': '2026-04-05 00:47:00.832136', 'delta': '0:00:00.007495', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-05 00:47:11.953095 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:47:01.066543', 'end': '2026-04-05 00:47:01.074742', 'delta': '0:00:00.008199', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-05 00:47:11.953111 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-05 00:46:59.731406', 'end': '2026-04-05 00:46:59.736154', 'delta': '0:00:00.004748', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-05 00:47:11.953121 | orchestrator | 2026-04-05 00:47:11.953131 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-04-05 00:47:11.953141 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:01.608) 0:00:10.874 ********** 2026-04-05 00:47:11.953151 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-05 00:47:11.953160 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-05 00:47:11.953170 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-05 00:47:11.953179 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-05 00:47:11.953233 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-05 00:47:11.953245 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-05 00:47:11.953256 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-05 00:47:11.953267 | orchestrator | 2026-04-05 00:47:11.953279 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-04-05 00:47:11.953291 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:02.745) 0:00:13.620 ********** 2026-04-05 00:47:11.953302 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-05 00:47:11.953314 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-05 00:47:11.953326 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-05 00:47:11.953337 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-05 00:47:11.953348 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-05 00:47:11.953359 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-05 00:47:11.953371 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-05 00:47:11.953382 | orchestrator | 2026-04-05 00:47:11.953394 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:47:11.953413 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:47:11.953427 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:47:11.953438 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:47:11.953457 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:47:11.953468 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:47:11.953480 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:47:11.953491 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:47:11.953558 | orchestrator | 2026-04-05 00:47:11.953570 | orchestrator | 2026-04-05 00:47:11.953581 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-04-05 00:47:11.953590 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:03.065) 0:00:16.685 ********** 2026-04-05 00:47:11.953600 | orchestrator | =============================================================================== 2026-04-05 00:47:11.953610 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.96s 2026-04-05 00:47:11.953619 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.19s 2026-04-05 00:47:11.953629 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.07s 2026-04-05 00:47:11.953638 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.75s 2026-04-05 00:47:11.953648 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.61s 2026-04-05 00:47:11.953658 | orchestrator | 2026-04-05 00:47:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:15.156939 | orchestrator | 2026-04-05 00:47:15 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:15.157014 | orchestrator | 2026-04-05 00:47:15 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:15.157021 | orchestrator | 2026-04-05 00:47:15 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:15.157027 | orchestrator | 2026-04-05 00:47:15 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:15.157067 | orchestrator | 2026-04-05 00:47:15 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state STARTED 2026-04-05 00:47:15.157073 | orchestrator | 2026-04-05 00:47:15 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:15.157078 | orchestrator | 2026-04-05 00:47:15 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is 
in state STARTED 2026-04-05 00:47:15.157083 | orchestrator | 2026-04-05 00:47:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:18.243459 | orchestrator | 2026-04-05 00:47:18 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:18.243762 | orchestrator | 2026-04-05 00:47:18 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:18.244583 | orchestrator | 2026-04-05 00:47:18 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:18.245630 | orchestrator | 2026-04-05 00:47:18 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:18.246370 | orchestrator | 2026-04-05 00:47:18 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state STARTED 2026-04-05 00:47:18.247222 | orchestrator | 2026-04-05 00:47:18 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:18.248243 | orchestrator | 2026-04-05 00:47:18 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:18.248277 | orchestrator | 2026-04-05 00:47:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:21.412946 | orchestrator | 2026-04-05 00:47:21 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:21.413407 | orchestrator | 2026-04-05 00:47:21 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:21.414452 | orchestrator | 2026-04-05 00:47:21 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:21.415855 | orchestrator | 2026-04-05 00:47:21 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:21.416991 | orchestrator | 2026-04-05 00:47:21 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state STARTED 2026-04-05 00:47:21.420692 | orchestrator | 2026-04-05 00:47:21 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in 
state STARTED 2026-04-05 00:47:21.420794 | orchestrator | 2026-04-05 00:47:21 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:21.420811 | orchestrator | 2026-04-05 00:47:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:24.494761 | orchestrator | 2026-04-05 00:47:24 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:24.495083 | orchestrator | 2026-04-05 00:47:24 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:24.495968 | orchestrator | 2026-04-05 00:47:24 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:24.496627 | orchestrator | 2026-04-05 00:47:24 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:24.499877 | orchestrator | 2026-04-05 00:47:24 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state STARTED 2026-04-05 00:47:24.500456 | orchestrator | 2026-04-05 00:47:24 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:24.501151 | orchestrator | 2026-04-05 00:47:24 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:24.501359 | orchestrator | 2026-04-05 00:47:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:27.627384 | orchestrator | 2026-04-05 00:47:27 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:27.628989 | orchestrator | 2026-04-05 00:47:27 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:27.630323 | orchestrator | 2026-04-05 00:47:27 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:27.630350 | orchestrator | 2026-04-05 00:47:27 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:27.631941 | orchestrator | 2026-04-05 00:47:27 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state 
STARTED 2026-04-05 00:47:27.633331 | orchestrator | 2026-04-05 00:47:27 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:27.635222 | orchestrator | 2026-04-05 00:47:27 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:27.635278 | orchestrator | 2026-04-05 00:47:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:30.738978 | orchestrator | 2026-04-05 00:47:30 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:30.739067 | orchestrator | 2026-04-05 00:47:30 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:30.739080 | orchestrator | 2026-04-05 00:47:30 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:30.739648 | orchestrator | 2026-04-05 00:47:30 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:30.741162 | orchestrator | 2026-04-05 00:47:30 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state STARTED 2026-04-05 00:47:30.743821 | orchestrator | 2026-04-05 00:47:30 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:30.750181 | orchestrator | 2026-04-05 00:47:30 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:30.750240 | orchestrator | 2026-04-05 00:47:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:33.898250 | orchestrator | 2026-04-05 00:47:33 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:33.898395 | orchestrator | 2026-04-05 00:47:33 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:33.898422 | orchestrator | 2026-04-05 00:47:33 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:33.898443 | orchestrator | 2026-04-05 00:47:33 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 
2026-04-05 00:47:33.898462 | orchestrator | 2026-04-05 00:47:33 | INFO  | Task 54a07e67-7bc2-4d7b-8db9-789bb5277f1d is in state SUCCESS 2026-04-05 00:47:33.898482 | orchestrator | 2026-04-05 00:47:33 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:33.898536 | orchestrator | 2026-04-05 00:47:33 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:33.898559 | orchestrator | 2026-04-05 00:47:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:37.010755 | orchestrator | 2026-04-05 00:47:36 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:37.010833 | orchestrator | 2026-04-05 00:47:36 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:37.010846 | orchestrator | 2026-04-05 00:47:36 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:37.010857 | orchestrator | 2026-04-05 00:47:36 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:37.010867 | orchestrator | 2026-04-05 00:47:36 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:37.010876 | orchestrator | 2026-04-05 00:47:36 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:37.010886 | orchestrator | 2026-04-05 00:47:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:40.314199 | orchestrator | 2026-04-05 00:47:39 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:40.314297 | orchestrator | 2026-04-05 00:47:39 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:40.314687 | orchestrator | 2026-04-05 00:47:39 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:40.314711 | orchestrator | 2026-04-05 00:47:40 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 
2026-04-05 00:47:40.314723 | orchestrator | 2026-04-05 00:47:40 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:40.314736 | orchestrator | 2026-04-05 00:47:40 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:40.314750 | orchestrator | 2026-04-05 00:47:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:43.134574 | orchestrator | 2026-04-05 00:47:43 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:43.136414 | orchestrator | 2026-04-05 00:47:43 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:43.137458 | orchestrator | 2026-04-05 00:47:43 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:43.138458 | orchestrator | 2026-04-05 00:47:43 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:43.139482 | orchestrator | 2026-04-05 00:47:43 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 2026-04-05 00:47:43.140374 | orchestrator | 2026-04-05 00:47:43 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED 2026-04-05 00:47:43.140395 | orchestrator | 2026-04-05 00:47:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:47:46.355281 | orchestrator | 2026-04-05 00:47:46 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:47:46.355401 | orchestrator | 2026-04-05 00:47:46 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state STARTED 2026-04-05 00:47:46.355417 | orchestrator | 2026-04-05 00:47:46 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:47:46.355428 | orchestrator | 2026-04-05 00:47:46 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED 2026-04-05 00:47:46.355439 | orchestrator | 2026-04-05 00:47:46 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED 
2026-04-05 00:47:46.355450 | orchestrator | 2026-04-05 00:47:46 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:47:46.355461 | orchestrator | 2026-04-05 00:47:46 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:47:49.290993 | orchestrator | 2026-04-05 00:47:49 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:47:49.291542 | orchestrator | 2026-04-05 00:47:49 | INFO  | Task 85405a17-2b31-4b32-bc95-48a27bec92be is in state SUCCESS
2026-04-05 00:47:49.303960 | orchestrator | 2026-04-05 00:47:49 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:47:49.311857 | orchestrator | 2026-04-05 00:47:49 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:47:49.315032 | orchestrator | 2026-04-05 00:47:49 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:47:49.317396 | orchestrator | 2026-04-05 00:47:49 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:47:49.317444 | orchestrator | 2026-04-05 00:47:49 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:47:52.371997 | orchestrator | 2026-04-05 00:47:52 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:47:52.373670 | orchestrator | 2026-04-05 00:47:52 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:47:52.374632 | orchestrator | 2026-04-05 00:47:52 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:47:52.378564 | orchestrator | 2026-04-05 00:47:52 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:47:52.379893 | orchestrator | 2026-04-05 00:47:52 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:47:52.380337 | orchestrator | 2026-04-05 00:47:52 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:47:55.476861 | orchestrator | 2026-04-05 00:47:55 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:47:55.479438 | orchestrator | 2026-04-05 00:47:55 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:47:55.484312 | orchestrator | 2026-04-05 00:47:55 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:47:55.485788 | orchestrator | 2026-04-05 00:47:55 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:47:55.489853 | orchestrator | 2026-04-05 00:47:55 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:47:55.489900 | orchestrator | 2026-04-05 00:47:55 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:47:58.552552 | orchestrator | 2026-04-05 00:47:58 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:47:58.553394 | orchestrator | 2026-04-05 00:47:58 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:47:58.554669 | orchestrator | 2026-04-05 00:47:58 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:47:58.556494 | orchestrator | 2026-04-05 00:47:58 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:47:58.558184 | orchestrator | 2026-04-05 00:47:58 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:47:58.558235 | orchestrator | 2026-04-05 00:47:58 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:01.729773 | orchestrator | 2026-04-05 00:48:01 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:01.729851 | orchestrator | 2026-04-05 00:48:01 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:01.732296 | orchestrator | 2026-04-05 00:48:01 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:01.734007 | orchestrator | 2026-04-05 00:48:01 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:48:01.734578 | orchestrator | 2026-04-05 00:48:01 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:01.734601 | orchestrator | 2026-04-05 00:48:01 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:04.814400 | orchestrator | 2026-04-05 00:48:04 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:04.814731 | orchestrator | 2026-04-05 00:48:04 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:04.816602 | orchestrator | 2026-04-05 00:48:04 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:04.816636 | orchestrator | 2026-04-05 00:48:04 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:48:04.817223 | orchestrator | 2026-04-05 00:48:04 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:04.817248 | orchestrator | 2026-04-05 00:48:04 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:07.888634 | orchestrator | 2026-04-05 00:48:07 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:07.893902 | orchestrator | 2026-04-05 00:48:07 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:07.899879 | orchestrator | 2026-04-05 00:48:07 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:07.902195 | orchestrator | 2026-04-05 00:48:07 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:48:07.903210 | orchestrator | 2026-04-05 00:48:07 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:07.904329 | orchestrator | 2026-04-05 00:48:07 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:10.953576 | orchestrator | 2026-04-05 00:48:10 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:10.963279 | orchestrator | 2026-04-05 00:48:10 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:10.963335 | orchestrator | 2026-04-05 00:48:10 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:10.963347 | orchestrator | 2026-04-05 00:48:10 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:48:10.963359 | orchestrator | 2026-04-05 00:48:10 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:10.963370 | orchestrator | 2026-04-05 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:14.041744 | orchestrator | 2026-04-05 00:48:14 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:14.047390 | orchestrator | 2026-04-05 00:48:14 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:14.049759 | orchestrator | 2026-04-05 00:48:14 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:14.050680 | orchestrator | 2026-04-05 00:48:14 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:48:14.060412 | orchestrator | 2026-04-05 00:48:14 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:14.060471 | orchestrator | 2026-04-05 00:48:14 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:17.125622 | orchestrator | 2026-04-05 00:48:17 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:17.125721 | orchestrator | 2026-04-05 00:48:17 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:17.128422 | orchestrator | 2026-04-05 00:48:17 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:17.128472 | orchestrator | 2026-04-05 00:48:17 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state STARTED
2026-04-05 00:48:17.130368 | orchestrator | 2026-04-05 00:48:17 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:17.130421 | orchestrator | 2026-04-05 00:48:17 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:20.196820 | orchestrator | 2026-04-05 00:48:20 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:20.200623 | orchestrator | 2026-04-05 00:48:20 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:20.201088 | orchestrator | 2026-04-05 00:48:20 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:20.201658 | orchestrator | 2026-04-05 00:48:20 | INFO  | Task 4ffece65-dcd0-49c8-a564-a711c224a09b is in state SUCCESS
2026-04-05 00:48:20.202584 | orchestrator |
2026-04-05 00:48:20.202679 | orchestrator |
2026-04-05 00:48:20.202698 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-05 00:48:20.202711 | orchestrator |
2026-04-05 00:48:20.202723 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-05 00:48:20.202734 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.686) 0:00:00.686 **********
2026-04-05 00:48:20.202745 | orchestrator | ok: [testbed-manager] => {
2026-04-05 00:48:20.202758 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-05 00:48:20.202770 | orchestrator | }
2026-04-05 00:48:20.202781 | orchestrator |
2026-04-05 00:48:20.202793 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-05 00:48:20.202804 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.325) 0:00:01.012 **********
2026-04-05 00:48:20.202815 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:20.202852 | orchestrator |
2026-04-05 00:48:20.202864 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-05 00:48:20.202875 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:01.611) 0:00:02.623 **********
2026-04-05 00:48:20.202886 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-05 00:48:20.202896 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-05 00:48:20.202907 | orchestrator |
2026-04-05 00:48:20.202918 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-05 00:48:20.202942 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:01.502) 0:00:04.125 **********
2026-04-05 00:48:20.202953 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.202964 | orchestrator |
2026-04-05 00:48:20.202974 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-05 00:48:20.202985 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:02.762) 0:00:06.888 **********
2026-04-05 00:48:20.202996 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.203006 | orchestrator |
2026-04-05 00:48:20.203016 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-05 00:48:20.203027 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:01.337) 0:00:08.225 **********
2026-04-05 00:48:20.203038 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-05 00:48:20.203048 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:20.203059 | orchestrator |
2026-04-05 00:48:20.203070 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-05 00:48:20.203080 | orchestrator | Sunday 05 April 2026 00:47:28 +0000 (0:00:26.691) 0:00:34.916 **********
2026-04-05 00:48:20.203091 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.203101 | orchestrator |
2026-04-05 00:48:20.203112 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:48:20.203123 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:48:20.203135 | orchestrator |
2026-04-05 00:48:20.203146 | orchestrator |
2026-04-05 00:48:20.203157 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:48:20.203167 | orchestrator | Sunday 05 April 2026 00:47:30 +0000 (0:00:02.727) 0:00:37.652 **********
2026-04-05 00:48:20.203178 | orchestrator | ===============================================================================
2026-04-05 00:48:20.203188 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.69s
2026-04-05 00:48:20.203199 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.76s
2026-04-05 00:48:20.203209 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.74s
2026-04-05 00:48:20.203220 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.61s
2026-04-05 00:48:20.203230 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.51s
2026-04-05 00:48:20.203241 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.34s
2026-04-05 00:48:20.203251 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.33s
2026-04-05 00:48:20.203262 | orchestrator |
2026-04-05 00:48:20.203272 | orchestrator |
2026-04-05 00:48:20.203283 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-05 00:48:20.203293 | orchestrator |
2026-04-05 00:48:20.203304 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-05 00:48:20.203314 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.608) 0:00:00.609 **********
2026-04-05 00:48:20.203326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-05 00:48:20.203337 | orchestrator |
2026-04-05 00:48:20.203348 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-05 00:48:20.203364 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.291) 0:00:00.900 **********
2026-04-05 00:48:20.203403 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-05 00:48:20.203427 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-05 00:48:20.203444 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-05 00:48:20.203461 | orchestrator |
2026-04-05 00:48:20.203480 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-05 00:48:20.203497 | orchestrator | Sunday 05 April 2026 00:46:56 +0000 (0:00:02.695) 0:00:03.595 **********
2026-04-05 00:48:20.203546 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.203564 | orchestrator |
2026-04-05 00:48:20.203582 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-05 00:48:20.203601 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:02.267) 0:00:05.863 **********
2026-04-05 00:48:20.203642 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-05 00:48:20.203659 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:20.203670 | orchestrator |
2026-04-05 00:48:20.203680 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-05 00:48:20.203691 | orchestrator | Sunday 05 April 2026 00:47:35 +0000 (0:00:37.016) 0:00:42.879 **********
2026-04-05 00:48:20.203701 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.203712 | orchestrator |
2026-04-05 00:48:20.203723 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-05 00:48:20.203733 | orchestrator | Sunday 05 April 2026 00:47:37 +0000 (0:00:01.938) 0:00:44.818 **********
2026-04-05 00:48:20.203744 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:20.203754 | orchestrator |
2026-04-05 00:48:20.203765 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-05 00:48:20.203776 | orchestrator | Sunday 05 April 2026 00:47:38 +0000 (0:00:00.760) 0:00:45.579 **********
2026-04-05 00:48:20.203786 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.203797 | orchestrator |
2026-04-05 00:48:20.203808 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-05 00:48:20.203818 | orchestrator | Sunday 05 April 2026 00:47:43 +0000 (0:00:04.466) 0:00:50.045 **********
2026-04-05 00:48:20.203828 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.203839 | orchestrator |
2026-04-05 00:48:20.203849 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-05 00:48:20.203860 | orchestrator | Sunday 05 April 2026 00:47:44 +0000 (0:00:00.894) 0:00:51.204 **********
2026-04-05 00:48:20.203878 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.203889 | orchestrator |
2026-04-05 00:48:20.203899 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-05 00:48:20.203911 | orchestrator | Sunday 05 April 2026 00:47:45 +0000 (0:00:00.894) 0:00:52.099 **********
2026-04-05 00:48:20.203921 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:20.203932 | orchestrator |
2026-04-05 00:48:20.203943 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:48:20.203954 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:48:20.203965 | orchestrator |
2026-04-05 00:48:20.203976 | orchestrator |
2026-04-05 00:48:20.203987 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:48:20.203997 | orchestrator | Sunday 05 April 2026 00:47:46 +0000 (0:00:01.328) 0:00:53.428 **********
2026-04-05 00:48:20.204008 | orchestrator | ===============================================================================
2026-04-05 00:48:20.204018 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.02s
2026-04-05 00:48:20.204029 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.47s
2026-04-05 00:48:20.204039 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.70s
2026-04-05 00:48:20.204059 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.27s
2026-04-05 00:48:20.204069 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.94s
2026-04-05 00:48:20.204080 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.33s
2026-04-05 00:48:20.204091 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.16s
2026-04-05 00:48:20.204101 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.89s
2026-04-05 00:48:20.204112 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.76s
2026-04-05 00:48:20.204122 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.29s
2026-04-05 00:48:20.204132 | orchestrator |
2026-04-05 00:48:20.204143 | orchestrator |
2026-04-05 00:48:20.204154 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-05 00:48:20.204164 | orchestrator |
2026-04-05 00:48:20.204175 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-05 00:48:20.204186 | orchestrator | Sunday 05 April 2026 00:47:13 +0000 (0:00:00.368) 0:00:00.368 **********
2026-04-05 00:48:20.204197 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:20.204208 | orchestrator |
2026-04-05 00:48:20.204219 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-04-05 00:48:20.204229 | orchestrator | Sunday 05 April 2026 00:47:15 +0000 (0:00:02.356) 0:00:02.725 **********
2026-04-05 00:48:20.204240 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-05 00:48:20.204250 | orchestrator |
2026-04-05 00:48:20.204261 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-05 00:48:20.204272 | orchestrator | Sunday 05 April 2026 00:47:16 +0000 (0:00:00.659) 0:00:03.384 **********
2026-04-05 00:48:20.204282 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.204293 | orchestrator |
2026-04-05 00:48:20.204304 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-05 00:48:20.204314 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:01.664) 0:00:05.049 **********
2026-04-05 00:48:20.204325 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-04-05 00:48:20.204336 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:20.204346 | orchestrator |
2026-04-05 00:48:20.204357 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-05 00:48:20.204368 | orchestrator | Sunday 05 April 2026 00:48:14 +0000 (0:00:56.675) 0:01:01.724 **********
2026-04-05 00:48:20.204378 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:20.204389 | orchestrator |
2026-04-05 00:48:20.204399 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:48:20.204410 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:48:20.204421 | orchestrator |
2026-04-05 00:48:20.204431 | orchestrator |
2026-04-05 00:48:20.204442 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:48:20.204460 | orchestrator | Sunday 05 April 2026 00:48:18 +0000 (0:00:04.018) 0:01:05.743 **********
2026-04-05 00:48:20.204471 | orchestrator | ===============================================================================
2026-04-05 00:48:20.204482 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.68s
2026-04-05 00:48:20.204492 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.02s
2026-04-05 00:48:20.204503 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.36s
2026-04-05 00:48:20.204539 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.66s
2026-04-05 00:48:20.204550 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.66s
2026-04-05 00:48:20.205222 | orchestrator | 2026-04-05 00:48:20 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:20.205324 | orchestrator | 2026-04-05 00:48:20 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:23.248043 | orchestrator | 2026-04-05 00:48:23 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:23.249772 | orchestrator | 2026-04-05 00:48:23 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:23.249921 | orchestrator | 2026-04-05 00:48:23 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:23.249939 | orchestrator | 2026-04-05 00:48:23 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:23.249958 | orchestrator | 2026-04-05 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:26.304485 | orchestrator | 2026-04-05 00:48:26 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:26.305989 | orchestrator | 2026-04-05 00:48:26 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:26.308608 | orchestrator | 2026-04-05 00:48:26 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:26.310905 | orchestrator | 2026-04-05 00:48:26 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:26.311026 | orchestrator | 2026-04-05 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:29.355571 | orchestrator | 2026-04-05 00:48:29 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:29.357134 | orchestrator | 2026-04-05 00:48:29 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:29.359127 | orchestrator | 2026-04-05 00:48:29 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:29.362185 | orchestrator | 2026-04-05 00:48:29 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:29.362262 | orchestrator | 2026-04-05 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:32.395827 | orchestrator | 2026-04-05 00:48:32 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:32.396018 | orchestrator | 2026-04-05 00:48:32 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:32.399262 | orchestrator | 2026-04-05 00:48:32 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:32.399783 | orchestrator | 2026-04-05 00:48:32 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state STARTED
2026-04-05 00:48:32.399816 | orchestrator | 2026-04-05 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:35.430411 | orchestrator | 2026-04-05 00:48:35 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:35.432302 | orchestrator | 2026-04-05 00:48:35 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:35.434457 | orchestrator | 2026-04-05 00:48:35 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:35.435957 | orchestrator | 2026-04-05 00:48:35 | INFO  | Task 469f29b3-4249-4c0e-a955-a13bc2fd1c9c is in state SUCCESS
2026-04-05 00:48:35.435983 | orchestrator | 2026-04-05 00:48:35 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:35.436383 | orchestrator |
2026-04-05 00:48:35.436393 | orchestrator |
2026-04-05 00:48:35.436398 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:48:35.436403 | orchestrator |
2026-04-05 00:48:35.436407 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:48:35.436412 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:00.533) 0:00:00.533 **********
2026-04-05 00:48:35.436427 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-05 00:48:35.436432 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-05 00:48:35.436437 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-05 00:48:35.436441 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-05 00:48:35.436446 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-05 00:48:35.436450 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-05 00:48:35.436454 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-05 00:48:35.436459 | orchestrator |
2026-04-05 00:48:35.436463 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-05 00:48:35.436468 | orchestrator |
2026-04-05 00:48:35.436472 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-05 00:48:35.436477 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:02.370) 0:00:02.903 **********
2026-04-05 00:48:35.436491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:48:35.436500 | orchestrator |
2026-04-05 00:48:35.436621 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-05 00:48:35.436634 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:01.937) 0:00:04.841 **********
2026-04-05 00:48:35.436639 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:48:35.436644 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:48:35.436656 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:48:35.436661 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:35.436671 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:48:35.436676 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:48:35.436681 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:48:35.436685 | orchestrator |
2026-04-05 00:48:35.436690 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-05 00:48:35.436695 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:03.188) 0:00:08.029 **********
2026-04-05 00:48:35.436700 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:48:35.436704 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:35.436709 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:48:35.436713 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:48:35.436719 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:48:35.436728 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:48:35.436732 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:48:35.436737 | orchestrator |
2026-04-05 00:48:35.436742 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-05 00:48:35.436746 | orchestrator | Sunday 05 April 2026 00:47:03 +0000 (0:00:03.262) 0:00:11.291 **********
2026-04-05 00:48:35.436751 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:35.436756 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:48:35.436760 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:48:35.436765 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:48:35.436769 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:48:35.436774 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:48:35.436778 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:48:35.436782 | orchestrator |
2026-04-05 00:48:35.436788 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-05 00:48:35.436796 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:02.270) 0:00:13.562 **********
2026-04-05 00:48:35.436801 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:48:35.436806 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:48:35.436810 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:48:35.436815 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:48:35.436819 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:48:35.436824 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:48:35.436828 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:35.436839 | orchestrator |
2026-04-05 00:48:35.436844 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-05 00:48:35.436848 | orchestrator | Sunday 05 April 2026 00:47:19 +0000 (0:00:13.390) 0:00:26.952 **********
2026-04-05 00:48:35.436853 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:48:35.436857 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:48:35.436862 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:48:35.436883 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:48:35.436888 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:48:35.436892 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:48:35.436897 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:35.436901 | orchestrator |
2026-04-05 00:48:35.436906 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-05 00:48:35.436910 | orchestrator | Sunday 05 April 2026 00:48:02 +0000 (0:00:42.833) 0:01:09.786 **********
2026-04-05 00:48:35.436916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:48:35.436921 | orchestrator |
2026-04-05 00:48:35.436926 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-05 00:48:35.436930 | orchestrator | Sunday 05 April 2026 00:48:05 +0000 (0:00:02.733) 0:01:12.520 **********
2026-04-05 00:48:35.436935 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-05 00:48:35.436940 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-05 00:48:35.436944 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-05 00:48:35.436949 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-05 00:48:35.436960 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-05 00:48:35.436965 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-05 00:48:35.436970 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-05 00:48:35.436974 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-05 00:48:35.436979 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-05 00:48:35.436983 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-05 00:48:35.436988 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-05 00:48:35.436992 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-05 00:48:35.436998 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-05 00:48:35.437004 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-05 00:48:35.437009 | orchestrator |
2026-04-05 00:48:35.437014 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-05 00:48:35.437020 | orchestrator | Sunday 05 April 2026 00:48:09 +0000 (0:00:04.755) 0:01:17.276 **********
2026-04-05 00:48:35.437025 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:35.437031 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:48:35.437036 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:48:35.437041 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:48:35.437047 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:48:35.437052 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:48:35.437058 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:48:35.437063 | orchestrator |
2026-04-05 00:48:35.437068 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-05 00:48:35.437074 | orchestrator | Sunday 05 April 2026 00:48:11 +0000 (0:00:01.657) 0:01:18.933 **********
2026-04-05 00:48:35.437079 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:48:35.437084 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:48:35.437089 | orchestrator | changed: [testbed-manager]
2026-04-05 00:48:35.437094 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:48:35.437099 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:48:35.437105 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:48:35.437113 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:48:35.437118 | orchestrator |
2026-04-05 00:48:35.437124 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-05 00:48:35.437131 | orchestrator | Sunday 05 April 2026 00:48:13 +0000 (0:00:01.438) 0:01:20.371 **********
2026-04-05 00:48:35.437136 | orchestrator | ok: [testbed-manager]
2026-04-05 00:48:35.437141 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:48:35.437147 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:48:35.437153 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:48:35.437161 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:48:35.437167 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:48:35.437172 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:48:35.437177 | orchestrator |
2026-04-05 00:48:35.437182 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-05 00:48:35.437188 | orchestrator | Sunday 05 April 2026 00:48:15 +0000 (0:00:02.098) 0:01:22.470 **********
2026-04-05 00:48:35.437193 |
orchestrator | ok: [testbed-node-3] 2026-04-05 00:48:35.437198 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:48:35.437203 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:48:35.437208 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:48:35.437213 | orchestrator | ok: [testbed-manager] 2026-04-05 00:48:35.437218 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:48:35.437223 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:48:35.437228 | orchestrator | 2026-04-05 00:48:35.437236 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-05 00:48:35.437243 | orchestrator | Sunday 05 April 2026 00:48:17 +0000 (0:00:02.410) 0:01:24.880 ********** 2026-04-05 00:48:35.437248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-05 00:48:35.437255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:48:35.437261 | orchestrator | 2026-04-05 00:48:35.437266 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-05 00:48:35.437271 | orchestrator | Sunday 05 April 2026 00:48:19 +0000 (0:00:01.763) 0:01:26.644 ********** 2026-04-05 00:48:35.437276 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:35.437282 | orchestrator | 2026-04-05 00:48:35.437287 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-05 00:48:35.437292 | orchestrator | Sunday 05 April 2026 00:48:21 +0000 (0:00:02.318) 0:01:28.963 ********** 2026-04-05 00:48:35.437298 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:48:35.437303 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:48:35.437308 | orchestrator | changed: [testbed-node-2] 2026-04-05 
00:48:35.437313 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:48:35.437319 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:48:35.437324 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:48:35.437330 | orchestrator | changed: [testbed-manager] 2026-04-05 00:48:35.437335 | orchestrator | 2026-04-05 00:48:35.437341 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:48:35.437346 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:48:35.437352 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:48:35.437357 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:48:35.437361 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:48:35.437369 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:48:35.437376 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:48:35.437381 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:48:35.437386 | orchestrator | 2026-04-05 00:48:35.437390 | orchestrator | 2026-04-05 00:48:35.437395 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:48:35.437399 | orchestrator | Sunday 05 April 2026 00:48:33 +0000 (0:00:11.611) 0:01:40.574 ********** 2026-04-05 00:48:35.437404 | orchestrator | =============================================================================== 2026-04-05 00:48:35.437409 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.83s 2026-04-05 00:48:35.437417 | orchestrator | 
osism.services.netdata : Add repository -------------------------------- 13.39s 2026-04-05 00:48:35.437422 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.61s 2026-04-05 00:48:35.437426 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.76s 2026-04-05 00:48:35.437431 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.26s 2026-04-05 00:48:35.437435 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.19s 2026-04-05 00:48:35.437440 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.73s 2026-04-05 00:48:35.437444 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.41s 2026-04-05 00:48:35.437449 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.37s 2026-04-05 00:48:35.437453 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.32s 2026-04-05 00:48:35.437458 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.27s 2026-04-05 00:48:35.437464 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.10s 2026-04-05 00:48:35.437469 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.94s 2026-04-05 00:48:35.437473 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.76s 2026-04-05 00:48:35.437478 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.66s 2026-04-05 00:48:35.437482 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.44s 2026-04-05 00:48:38.493738 | orchestrator | 2026-04-05 00:48:38 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:48:38.495633 | 
orchestrator | 2026-04-05 00:48:38 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:38.496299 | orchestrator | 2026-04-05 00:48:38 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:38.496334 | orchestrator | 2026-04-05 00:48:38 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:41.569650 | orchestrator | 2026-04-05 00:48:41 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:41.574118 | orchestrator | 2026-04-05 00:48:41 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:41.575454 | orchestrator | 2026-04-05 00:48:41 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:41.576816 | orchestrator | 2026-04-05 00:48:41 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:44.630640 | orchestrator | 2026-04-05 00:48:44 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:44.632637 | orchestrator | 2026-04-05 00:48:44 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:44.635703 | orchestrator | 2026-04-05 00:48:44 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:44.635768 | orchestrator | 2026-04-05 00:48:44 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:47.697380 | orchestrator | 2026-04-05 00:48:47 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:47.717116 | orchestrator | 2026-04-05 00:48:47 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:47.720176 | orchestrator | 2026-04-05 00:48:47 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:47.720232 | orchestrator | 2026-04-05 00:48:47 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:50.773942 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:50.774741 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:50.776253 | orchestrator | 2026-04-05 00:48:50 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:50.776288 | orchestrator | 2026-04-05 00:48:50 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:53.817095 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:53.817153 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:53.817927 | orchestrator | 2026-04-05 00:48:53 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:53.817938 | orchestrator | 2026-04-05 00:48:53 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:56.872002 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:56.873108 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:56.877360 | orchestrator | 2026-04-05 00:48:56 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:56.877428 | orchestrator | 2026-04-05 00:48:56 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:48:59.917883 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:48:59.918978 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:48:59.923195 | orchestrator | 2026-04-05 00:48:59 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:48:59.923270 | orchestrator | 2026-04-05 00:48:59 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:02.967095 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:02.967177 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:02.967196 | orchestrator | 2026-04-05 00:49:02 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:02.967216 | orchestrator | 2026-04-05 00:49:02 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:06.010823 | orchestrator | 2026-04-05 00:49:06 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:06.013291 | orchestrator | 2026-04-05 00:49:06 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:06.015545 | orchestrator | 2026-04-05 00:49:06 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:06.015596 | orchestrator | 2026-04-05 00:49:06 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:09.048406 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:09.049887 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:09.051082 | orchestrator | 2026-04-05 00:49:09 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:09.051115 | orchestrator | 2026-04-05 00:49:09 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:12.100987 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:12.102011 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:12.103260 | orchestrator | 2026-04-05 00:49:12 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:12.103300 | orchestrator | 2026-04-05 00:49:12 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:15.156237 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:15.157843 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:15.160198 | orchestrator | 2026-04-05 00:49:15 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:15.160582 | orchestrator | 2026-04-05 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:18.200197 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:18.201851 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:18.204404 | orchestrator | 2026-04-05 00:49:18 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:18.204493 | orchestrator | 2026-04-05 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:21.256501 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:21.257730 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:21.259175 | orchestrator | 2026-04-05 00:49:21 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:21.259262 | orchestrator | 2026-04-05 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:24.304164 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:24.306920 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:24.309168 | orchestrator | 2026-04-05 00:49:24 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state STARTED
2026-04-05 00:49:24.309218 | orchestrator | 2026-04-05 00:49:24 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:49:27.346621 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED
2026-04-05 00:49:27.346879 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:49:27.347747 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED
2026-04-05 00:49:27.349175 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED
2026-04-05 00:49:27.353454 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 6dccdfcc-05bb-41c0-9938-f8e4b07aa37f is in state SUCCESS
2026-04-05 00:49:27.358410 | orchestrator |
2026-04-05 00:49:27.358485 | orchestrator |
2026-04-05 00:49:27.358499 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-05 00:49:27.358545 | orchestrator |
2026-04-05 00:49:27.358558 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 00:49:27.358569 | orchestrator | Sunday 05 April 2026 00:46:46 +0000 (0:00:00.336) 0:00:00.336 **********
2026-04-05 00:49:27.358582 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:49:27.358594 | orchestrator |
2026-04-05 00:49:27.358605 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-05 00:49:27.358616 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:01.400) 0:00:01.737 **********
2026-04-05 00:49:27.358626 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:49:27.358638 |
orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:49:27.358656 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:49:27.358675 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:49:27.358692 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:49:27.358710 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:49:27.358727 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:49:27.358745 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:49:27.358762 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:49:27.358782 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:49:27.358798 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:49:27.358816 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 00:49:27.358834 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:49:27.358853 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:49:27.358873 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:49:27.358891 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:49:27.358910 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 00:49:27.358929 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:49:27.358948 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:49:27.358968 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:49:27.358985 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 00:49:27.359005 | orchestrator |
2026-04-05 00:49:27.359024 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 00:49:27.359043 | orchestrator | Sunday 05 April 2026 00:46:52 +0000 (0:00:04.143) 0:00:05.881 **********
2026-04-05 00:49:27.359062 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:49:27.359084 | orchestrator |
2026-04-05 00:49:27.359104 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-05 00:49:27.359150 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:01.623) 0:00:07.504 **********
2026-04-05 00:49:27.359197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.359410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.359467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.359489 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.359539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.359560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.359581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.359601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359720 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359849 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.359992 | orchestrator |
2026-04-05 00:49:27.360004 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-05 00:49:27.360015 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:05.837) 0:00:13.342 **********
2026-04-05 00:49:27.360027 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.360048 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.360060 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.360071 | orchestrator | skipping: [testbed-manager]
2026-04-05 00:49:27.360083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.360107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.360120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.360131 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:49:27.360142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 00:49:27.360153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.360171 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.360194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360216 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:49:27.360246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.360258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360281 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:49:27.360292 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:49:27.360303 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.360321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360343 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:49:27.360354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.360376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360400 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:49:27.360411 | orchestrator | 2026-04-05 00:49:27.360422 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-05 00:49:27.360433 | orchestrator | Sunday 05 April 2026 00:47:02 +0000 (0:00:02.708) 0:00:16.050 ********** 2026-04-05 00:49:27.360444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.360461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360484 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:49:27.360495 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2026-04-05 00:49:27.360536 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360549 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360560 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:49:27.360590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.360602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.360643 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:49:27.360655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.360677 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:49:27.360693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.361932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.361973 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.361985 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:49:27.361997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.362056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.362071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.362082 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:49:27.362093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 00:49:27.362105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.362116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.362127 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:49:27.362138 
| orchestrator | 2026-04-05 00:49:27.362149 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-05 00:49:27.362160 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:02.728) 0:00:18.779 ********** 2026-04-05 00:49:27.362171 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:49:27.362182 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:49:27.362193 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:49:27.362212 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:49:27.362223 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:49:27.362244 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:49:27.362256 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:49:27.362267 | orchestrator | 2026-04-05 00:49:27.362278 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-05 00:49:27.362294 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:01.049) 0:00:19.828 ********** 2026-04-05 00:49:27.362305 | orchestrator | skipping: [testbed-manager] 2026-04-05 00:49:27.362316 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:49:27.362326 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:49:27.362337 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:49:27.362347 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:49:27.362358 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:49:27.362369 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:49:27.362380 | orchestrator | 2026-04-05 00:49:27.362390 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-05 00:49:27.362401 | orchestrator | Sunday 05 April 2026 00:47:08 +0000 (0:00:01.679) 0:00:21.508 ********** 2026-04-05 00:49:27.362413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.362424 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.362435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.362458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.362469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.362495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362544 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.362586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.362666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.362827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.362847 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.362865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.362884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:49:27.362913 | orchestrator |
2026-04-05 00:49:27.362933 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-05 00:49:27.362951 | orchestrator | Sunday 05 April 2026 00:47:16 +0000 (0:00:08.065) 0:00:29.573 **********
2026-04-05 00:49:27.362970 | orchestrator | [WARNING]: Skipped
2026-04-05 00:49:27.362992 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-05 00:49:27.363011 | orchestrator | to this access issue:
2026-04-05 00:49:27.363029 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-05 00:49:27.363048 | orchestrator | directory
2026-04-05 00:49:27.363066 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 00:49:27.363084 | orchestrator |
2026-04-05 00:49:27.363101 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-05 00:49:27.363119 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:01.494) 0:00:31.068 **********
2026-04-05 00:49:27.363138 | orchestrator | [WARNING]: Skipped
2026-04-05 00:49:27.363164 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-05 00:49:27.363193 | orchestrator | to this access issue:
2026-04-05 00:49:27.363212 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-05 00:49:27.363230 | orchestrator | directory
2026-04-05 00:49:27.363250 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 00:49:27.363267 | orchestrator |
2026-04-05 00:49:27.363285 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-05 00:49:27.363303 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:01.928) 0:00:32.304 **********
2026-04-05 00:49:27.363320 | orchestrator | [WARNING]: Skipped
2026-04-05 00:49:27.363340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-05 00:49:27.363357 | orchestrator | to this access issue:
2026-04-05 00:49:27.363377 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-05 00:49:27.363395 | orchestrator | directory
2026-04-05 00:49:27.363414 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 00:49:27.363432 | orchestrator |
2026-04-05 00:49:27.363451 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-05 00:49:27.363470 | orchestrator | Sunday 05 April 2026 00:47:20 +0000 (0:00:01.228) 0:00:34.233 **********
2026-04-05 00:49:27.363489 | orchestrator | [WARNING]: Skipped
2026-04-05 00:49:27.363553 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-05 00:49:27.363575 | orchestrator | to this access issue:
2026-04-05 00:49:27.363594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-05 00:49:27.363614 | orchestrator | directory
2026-04-05 00:49:27.363632 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 00:49:27.363650 | orchestrator |
2026-04-05 00:49:27.363667 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-05 00:49:27.363685 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:06.104) 0:00:35.461 **********
2026-04-05 00:49:27.363703 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:49:27.363721 | orchestrator | changed: [testbed-manager]
2026-04-05 00:49:27.363739 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:49:27.363757 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:49:27.363774 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:49:27.363793 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:49:27.363810 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:49:27.363828 | orchestrator |
2026-04-05 00:49:27.363846 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-05 00:49:27.363864 | orchestrator | Sunday 05 April 2026 00:47:28 +0000 (0:00:06.104) 0:00:41.566 **********
2026-04-05 00:49:27.363882 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 00:49:27.363918 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 00:49:27.363936 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:49:27.363953 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:49:27.363971 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:49:27.363989 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:49:27.364006 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 00:49:27.364023 | orchestrator | 2026-04-05 00:49:27.364041 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-05 00:49:27.364059 | orchestrator | Sunday 05 April 2026 00:47:33 +0000 (0:00:05.759) 0:00:47.325 ********** 2026-04-05 00:49:27.364077 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:27.364095 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:27.364113 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:27.364131 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:49:27.364148 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:49:27.364166 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:49:27.364184 | orchestrator | changed: [testbed-manager] 2026-04-05 00:49:27.364202 | orchestrator | 2026-04-05 00:49:27.364220 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-05 00:49:27.364237 | orchestrator | Sunday 05 April 2026 00:47:36 +0000 (0:00:02.545) 0:00:49.871 ********** 2026-04-05 00:49:27.364257 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.364296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.364316 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.364335 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.364363 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.364382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.364401 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.364422 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.364439 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.364465 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.364497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.364544 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.364574 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.364594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.364613 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.364632 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.364652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.364688 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.364701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:49:27.364720 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.364732 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.364743 | orchestrator | 2026-04-05 00:49:27.364755 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-05 00:49:27.364766 | orchestrator | Sunday 05 April 2026 00:47:41 +0000 (0:00:04.905) 0:00:54.776 ********** 2026-04-05 00:49:27.364777 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:49:27.364788 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-05 00:49:27.364799 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 00:49:27.364809 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 00:49:27.364820 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 00:49:27.364831 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 00:49:27.364841 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 00:49:27.364852 | orchestrator |
2026-04-05 00:49:27.364862 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-05 00:49:27.364873 | orchestrator | Sunday 05 April 2026 00:47:44 +0000 (0:00:03.417) 0:00:58.193 **********
2026-04-05 00:49:27.364884 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 00:49:27.364895 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 00:49:27.364906 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 00:49:27.364916 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 00:49:27.364927 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 00:49:27.364938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 00:49:27.364948 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 00:49:27.364959 | orchestrator |
2026-04-05 00:49:27.364970 | orchestrator | TASK [common : Check common containers] ****************************************
2026-04-05 00:49:27.364980 | orchestrator | Sunday 05 April 2026
00:47:47 +0000 (0:00:02.934) 0:01:01.128 ********** 2026-04-05 00:49:27.364996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.365039 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.365071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.365090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.365110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.365148 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.365234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365252 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 00:49:27.365308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365325 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:49:27.365493 | orchestrator | 2026-04-05 00:49:27.365544 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-05 00:49:27.365565 | orchestrator | Sunday 05 April 2026 00:47:51 +0000 (0:00:03.518) 0:01:04.646 ********** 2026-04-05 00:49:27.365586 | 
orchestrator | changed: [testbed-manager] 2026-04-05 00:49:27.365606 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:27.365625 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:27.365645 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:27.365665 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:49:27.365682 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:49:27.365701 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:49:27.365747 | orchestrator | 2026-04-05 00:49:27.365765 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-05 00:49:27.365783 | orchestrator | Sunday 05 April 2026 00:47:53 +0000 (0:00:01.792) 0:01:06.439 ********** 2026-04-05 00:49:27.365801 | orchestrator | changed: [testbed-manager] 2026-04-05 00:49:27.365818 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:27.365835 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:27.365853 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:27.365873 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:49:27.365890 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:49:27.365909 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:49:27.365925 | orchestrator | 2026-04-05 00:49:27.365943 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:49:27.365976 | orchestrator | Sunday 05 April 2026 00:47:54 +0000 (0:00:01.557) 0:01:07.997 ********** 2026-04-05 00:49:27.365993 | orchestrator | 2026-04-05 00:49:27.366012 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:49:27.366090 | orchestrator | Sunday 05 April 2026 00:47:54 +0000 (0:00:00.071) 0:01:08.068 ********** 2026-04-05 00:49:27.366111 | orchestrator | 2026-04-05 00:49:27.366133 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 
00:49:27.366154 | orchestrator | Sunday 05 April 2026 00:47:54 +0000 (0:00:00.064) 0:01:08.132 ********** 2026-04-05 00:49:27.366174 | orchestrator | 2026-04-05 00:49:27.366195 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:49:27.366213 | orchestrator | Sunday 05 April 2026 00:47:54 +0000 (0:00:00.065) 0:01:08.198 ********** 2026-04-05 00:49:27.366231 | orchestrator | 2026-04-05 00:49:27.366251 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:49:27.366270 | orchestrator | Sunday 05 April 2026 00:47:54 +0000 (0:00:00.066) 0:01:08.265 ********** 2026-04-05 00:49:27.366289 | orchestrator | 2026-04-05 00:49:27.366309 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:49:27.366326 | orchestrator | Sunday 05 April 2026 00:47:54 +0000 (0:00:00.064) 0:01:08.329 ********** 2026-04-05 00:49:27.366343 | orchestrator | 2026-04-05 00:49:27.366363 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 00:49:27.366383 | orchestrator | Sunday 05 April 2026 00:47:55 +0000 (0:00:00.085) 0:01:08.415 ********** 2026-04-05 00:49:27.366401 | orchestrator | 2026-04-05 00:49:27.366420 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-05 00:49:27.366455 | orchestrator | Sunday 05 April 2026 00:47:55 +0000 (0:00:00.113) 0:01:08.528 ********** 2026-04-05 00:49:27.366475 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:27.366494 | orchestrator | changed: [testbed-manager] 2026-04-05 00:49:27.366578 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:27.366596 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:49:27.366611 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:49:27.366626 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:27.366642 | orchestrator 
| changed: [testbed-node-3] 2026-04-05 00:49:27.366659 | orchestrator | 2026-04-05 00:49:27.366675 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-05 00:49:27.366691 | orchestrator | Sunday 05 April 2026 00:48:32 +0000 (0:00:36.897) 0:01:45.425 ********** 2026-04-05 00:49:27.366708 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:27.366723 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:27.366739 | orchestrator | changed: [testbed-manager] 2026-04-05 00:49:27.366755 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:49:27.366772 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:49:27.366788 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:49:27.366804 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:27.366820 | orchestrator | 2026-04-05 00:49:27.366836 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-05 00:49:27.366853 | orchestrator | Sunday 05 April 2026 00:49:13 +0000 (0:00:41.420) 0:02:26.846 ********** 2026-04-05 00:49:27.366870 | orchestrator | ok: [testbed-manager] 2026-04-05 00:49:27.366890 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:49:27.366908 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:49:27.366924 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:49:27.366941 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:49:27.366956 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:49:27.366974 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:49:27.366992 | orchestrator | 2026-04-05 00:49:27.367009 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-05 00:49:27.367025 | orchestrator | Sunday 05 April 2026 00:49:15 +0000 (0:00:02.215) 0:02:29.061 ********** 2026-04-05 00:49:27.367042 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:27.367068 | orchestrator | changed: [testbed-manager] 2026-04-05 
00:49:27.367078 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:27.367088 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:27.367097 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:49:27.367107 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:49:27.367116 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:49:27.367126 | orchestrator | 2026-04-05 00:49:27.367135 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:49:27.367146 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:49:27.367168 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:49:27.367178 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:49:27.367188 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:49:27.367198 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:49:27.367208 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:49:27.367218 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 00:49:27.367227 | orchestrator | 2026-04-05 00:49:27.367237 | orchestrator | 2026-04-05 00:49:27.367247 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:49:27.367257 | orchestrator | Sunday 05 April 2026 00:49:25 +0000 (0:00:09.835) 0:02:38.897 ********** 2026-04-05 00:49:27.367267 | orchestrator | =============================================================================== 2026-04-05 00:49:27.367277 | orchestrator | common : Restart kolla-toolbox container 
------------------------------- 41.42s 2026-04-05 00:49:27.367286 | orchestrator | common : Restart fluentd container ------------------------------------- 36.90s 2026-04-05 00:49:27.367296 | orchestrator | common : Restart cron container ----------------------------------------- 9.84s 2026-04-05 00:49:27.367305 | orchestrator | common : Copying over config.json files for services -------------------- 8.07s 2026-04-05 00:49:27.367315 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.10s 2026-04-05 00:49:27.367324 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.84s 2026-04-05 00:49:27.367333 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.76s 2026-04-05 00:49:27.367343 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.91s 2026-04-05 00:49:27.367352 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.14s 2026-04-05 00:49:27.367362 | orchestrator | common : Check common containers ---------------------------------------- 3.52s 2026-04-05 00:49:27.367372 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.42s 2026-04-05 00:49:27.367382 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.93s 2026-04-05 00:49:27.367391 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.73s 2026-04-05 00:49:27.367405 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.71s 2026-04-05 00:49:27.367428 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.55s 2026-04-05 00:49:27.367438 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.22s 2026-04-05 00:49:27.367447 | orchestrator | common : Find custom fluentd format config files 
------------------------ 1.93s 2026-04-05 00:49:27.367467 | orchestrator | common : Creating log volume -------------------------------------------- 1.79s 2026-04-05 00:49:27.367483 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.68s 2026-04-05 00:49:27.367499 | orchestrator | common : include_tasks -------------------------------------------------- 1.62s 2026-04-05 00:49:27.367592 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:27.367611 | orchestrator | 2026-04-05 00:49:27 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:27.367625 | orchestrator | 2026-04-05 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:30.404800 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED 2026-04-05 00:49:30.404919 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:30.405404 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:30.406467 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:30.407006 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:30.408075 | orchestrator | 2026-04-05 00:49:30 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:30.408135 | orchestrator | 2026-04-05 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:33.441725 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED 2026-04-05 00:49:33.442544 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 
00:49:33.443367 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:33.444610 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:33.445591 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:33.446600 | orchestrator | 2026-04-05 00:49:33 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:33.446624 | orchestrator | 2026-04-05 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:36.476661 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED 2026-04-05 00:49:36.477494 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:36.478370 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:36.479956 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:36.480605 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:36.481492 | orchestrator | 2026-04-05 00:49:36 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:36.481554 | orchestrator | 2026-04-05 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:39.628600 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED 2026-04-05 00:49:39.628711 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:39.628756 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 
00:49:39.628769 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:39.628781 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:39.628791 | orchestrator | 2026-04-05 00:49:39 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:39.628819 | orchestrator | 2026-04-05 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:42.613801 | orchestrator | 2026-04-05 00:49:42 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED 2026-04-05 00:49:42.615967 | orchestrator | 2026-04-05 00:49:42 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:42.618646 | orchestrator | 2026-04-05 00:49:42 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:42.621280 | orchestrator | 2026-04-05 00:49:42 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:42.621804 | orchestrator | 2026-04-05 00:49:42 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:42.623242 | orchestrator | 2026-04-05 00:49:42 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:42.623271 | orchestrator | 2026-04-05 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:45.682552 | orchestrator | 2026-04-05 00:49:45 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED 2026-04-05 00:49:45.688379 | orchestrator | 2026-04-05 00:49:45 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:45.694264 | orchestrator | 2026-04-05 00:49:45 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:45.698120 | orchestrator | 2026-04-05 00:49:45 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 
00:49:45.703747 | orchestrator | 2026-04-05 00:49:45 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:45.706068 | orchestrator | 2026-04-05 00:49:45 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:45.706147 | orchestrator | 2026-04-05 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:48.909967 | orchestrator | 2026-04-05 00:49:48 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state STARTED 2026-04-05 00:49:48.910074 | orchestrator | 2026-04-05 00:49:48 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:48.953062 | orchestrator | 2026-04-05 00:49:48 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:48.953293 | orchestrator | 2026-04-05 00:49:48 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:48.953339 | orchestrator | 2026-04-05 00:49:48 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:48.954197 | orchestrator | 2026-04-05 00:49:48 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:48.954237 | orchestrator | 2026-04-05 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:52.293978 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task f5b7cd76-97d5-4b77-ba4b-0e212a263270 is in state SUCCESS 2026-04-05 00:49:52.294202 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:52.295574 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:52.296565 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:52.297624 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 
00:49:52.304114 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:52.304208 | orchestrator | 2026-04-05 00:49:52 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state STARTED 2026-04-05 00:49:52.304224 | orchestrator | 2026-04-05 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:55.342159 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:55.342217 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:55.343332 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:55.343357 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:49:55.343363 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:55.344220 | orchestrator | 2026-04-05 00:49:55 | INFO  | Task 09364260-f753-473e-b09d-6fd03ca9c40e is in state SUCCESS 2026-04-05 00:49:55.344240 | orchestrator | 2026-04-05 00:49:55.344278 | orchestrator | 2026-04-05 00:49:55.344285 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:49:55.344292 | orchestrator | 2026-04-05 00:49:55.344298 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:49:55.344304 | orchestrator | Sunday 05 April 2026 00:49:31 +0000 (0:00:00.422) 0:00:00.422 ********** 2026-04-05 00:49:55.344310 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:49:55.344317 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:49:55.344323 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:49:55.344329 | orchestrator | 2026-04-05 00:49:55.344336 | orchestrator | TASK [Group hosts based on 
enabled services] *********************************** 2026-04-05 00:49:55.344342 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:00.717) 0:00:01.139 ********** 2026-04-05 00:49:55.344348 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-05 00:49:55.344354 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-05 00:49:55.344361 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-05 00:49:55.344367 | orchestrator | 2026-04-05 00:49:55.344374 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-05 00:49:55.344380 | orchestrator | 2026-04-05 00:49:55.344387 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-05 00:49:55.344393 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:00.502) 0:00:01.642 ********** 2026-04-05 00:49:55.344400 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:49:55.344407 | orchestrator | 2026-04-05 00:49:55.344414 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-05 00:49:55.344420 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:00.643) 0:00:02.286 ********** 2026-04-05 00:49:55.344427 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-05 00:49:55.344433 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 00:49:55.344440 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-05 00:49:55.344446 | orchestrator | 2026-04-05 00:49:55.344454 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-05 00:49:55.344476 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:01.956) 0:00:04.242 ********** 2026-04-05 00:49:55.344482 | orchestrator | changed: [testbed-node-1] => (item=memcached) 
2026-04-05 00:49:55.344489 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-05 00:49:55.344496 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 00:49:55.344528 | orchestrator | 2026-04-05 00:49:55.344566 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-05 00:49:55.344573 | orchestrator | Sunday 05 April 2026 00:49:37 +0000 (0:00:01.956) 0:00:06.199 ********** 2026-04-05 00:49:55.344579 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:55.344586 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:55.344620 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:55.344627 | orchestrator | 2026-04-05 00:49:55.344634 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-05 00:49:55.344640 | orchestrator | Sunday 05 April 2026 00:49:39 +0000 (0:00:02.622) 0:00:08.821 ********** 2026-04-05 00:49:55.344646 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:55.344653 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:55.344659 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:55.344700 | orchestrator | 2026-04-05 00:49:55.344708 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:49:55.344715 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:49:55.344722 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:49:55.344728 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:49:55.344734 | orchestrator | 2026-04-05 00:49:55.344741 | orchestrator | 2026-04-05 00:49:55.344747 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:49:55.344752 | orchestrator | Sunday 
05 April 2026 00:49:49 +0000 (0:00:09.172) 0:00:17.993 ********** 2026-04-05 00:49:55.344758 | orchestrator | =============================================================================== 2026-04-05 00:49:55.344798 | orchestrator | memcached : Restart memcached container --------------------------------- 9.17s 2026-04-05 00:49:55.344805 | orchestrator | memcached : Check memcached container ----------------------------------- 2.62s 2026-04-05 00:49:55.344811 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.96s 2026-04-05 00:49:55.344816 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.96s 2026-04-05 00:49:55.344822 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2026-04-05 00:49:55.344828 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.64s 2026-04-05 00:49:55.344833 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2026-04-05 00:49:55.344839 | orchestrator | 2026-04-05 00:49:55.345046 | orchestrator | 2026-04-05 00:49:55.345060 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:49:55.345067 | orchestrator | 2026-04-05 00:49:55.345074 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:49:55.345080 | orchestrator | Sunday 05 April 2026 00:49:31 +0000 (0:00:00.498) 0:00:00.498 ********** 2026-04-05 00:49:55.345088 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:49:55.345095 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:49:55.345102 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:49:55.345107 | orchestrator | 2026-04-05 00:49:55.345119 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:49:55.345126 | orchestrator | Sunday 05 April 2026 00:49:31 
+0000 (0:00:00.480) 0:00:00.979 ********** 2026-04-05 00:49:55.345132 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-05 00:49:55.345139 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-05 00:49:55.345153 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-05 00:49:55.345161 | orchestrator | 2026-04-05 00:49:55.345168 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-05 00:49:55.345174 | orchestrator | 2026-04-05 00:49:55.345179 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-05 00:49:55.345186 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:00.634) 0:00:01.614 ********** 2026-04-05 00:49:55.345192 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:49:55.345199 | orchestrator | 2026-04-05 00:49:55.345206 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-05 00:49:55.345213 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:01.044) 0:00:02.658 ********** 2026-04-05 00:49:55.345221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345281 | orchestrator | 2026-04-05 00:49:55.345287 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-05 00:49:55.345293 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:01.992) 0:00:04.651 ********** 2026-04-05 00:49:55.345299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345323 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345348 | orchestrator | 2026-04-05 00:49:55.345355 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-05 00:49:55.345361 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:02.898) 0:00:07.550 ********** 2026-04-05 00:49:55.345368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345426 | orchestrator | 2026-04-05 00:49:55.345433 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-05 00:49:55.345439 | orchestrator | Sunday 05 April 2026 00:49:41 +0000 (0:00:03.677) 0:00:11.228 ********** 2026-04-05 00:49:55.345445 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 00:49:55.345494 | orchestrator | 2026-04-05 00:49:55.345539 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-04-05 00:49:55.345549 | orchestrator | Sunday 05 April 2026 00:49:43 +0000 (0:00:01.888) 0:00:13.116 ********** 2026-04-05 00:49:55.345556 | orchestrator | 2026-04-05 00:49:55.345566 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-05 00:49:55.345573 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:00.311) 0:00:13.427 ********** 2026-04-05 00:49:55.345578 | orchestrator | 2026-04-05 00:49:55.345585 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-05 00:49:55.345591 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:00.115) 0:00:13.542 ********** 2026-04-05 00:49:55.345597 | orchestrator | 2026-04-05 00:49:55.345604 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-05 00:49:55.345611 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:00.110) 0:00:13.652 ********** 2026-04-05 00:49:55.345618 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:55.345624 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:55.345631 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:55.345638 | orchestrator | 2026-04-05 00:49:55.345645 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-05 00:49:55.345650 | orchestrator | Sunday 05 April 2026 00:49:48 +0000 (0:00:04.389) 0:00:18.042 ********** 2026-04-05 00:49:55.345656 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:49:55.345662 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:49:55.345668 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:49:55.345674 | orchestrator | 2026-04-05 00:49:55.345680 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:49:55.345687 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:49:55.345694 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:49:55.345700 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:49:55.345705 | orchestrator | 2026-04-05 00:49:55.345711 | orchestrator | 2026-04-05 00:49:55.345718 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:49:55.345725 | orchestrator | Sunday 05 April 2026 00:49:53 +0000 (0:00:04.981) 0:00:23.023 ********** 2026-04-05 00:49:55.345732 | orchestrator | =============================================================================== 2026-04-05 00:49:55.345739 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.98s 2026-04-05 00:49:55.345746 | orchestrator | redis : Restart redis container ----------------------------------------- 4.39s 2026-04-05 00:49:55.345753 | orchestrator | redis : Copying over redis config files --------------------------------- 3.68s 2026-04-05 00:49:55.345761 | orchestrator | redis : Copying over default config.json files -------------------------- 2.90s 2026-04-05 00:49:55.345768 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.99s 2026-04-05 00:49:55.345781 | orchestrator | redis : Check redis containers ------------------------------------------ 1.89s 2026-04-05 00:49:55.345788 | orchestrator | redis : include_tasks --------------------------------------------------- 1.04s 2026-04-05 00:49:55.345795 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-04-05 00:49:55.345802 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.54s 2026-04-05 00:49:55.345810 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 
2026-04-05 00:49:55.345817 | orchestrator | 2026-04-05 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:49:58.409584 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:49:58.409662 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:49:58.409671 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:49:58.409677 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:49:58.416525 | orchestrator | 2026-04-05 00:49:58 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:49:58.416574 | orchestrator | 2026-04-05 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:01.469982 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:01.477864 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:01.480470 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:01.483542 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:01.484178 | orchestrator | 2026-04-05 00:50:01 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:01.484203 | orchestrator | 2026-04-05 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:04.592318 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:04.592472 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:04.593254 | 
orchestrator | 2026-04-05 00:50:04 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:04.593938 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:04.594698 | orchestrator | 2026-04-05 00:50:04 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:04.594735 | orchestrator | 2026-04-05 00:50:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:07.661062 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:07.664011 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:07.664104 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:07.664120 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:07.664132 | orchestrator | 2026-04-05 00:50:07 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:07.664144 | orchestrator | 2026-04-05 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:10.694597 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:10.694692 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:10.695328 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:10.696093 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:10.697718 | orchestrator | 2026-04-05 00:50:10 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:10.697743 | 
orchestrator | 2026-04-05 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:13.731777 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:13.732616 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:13.734174 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:13.734236 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:13.736685 | orchestrator | 2026-04-05 00:50:13 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:13.736757 | orchestrator | 2026-04-05 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:16.822678 | orchestrator | 2026-04-05 00:50:16 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:16.823683 | orchestrator | 2026-04-05 00:50:16 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:16.824950 | orchestrator | 2026-04-05 00:50:16 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:16.826133 | orchestrator | 2026-04-05 00:50:16 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:16.827303 | orchestrator | 2026-04-05 00:50:16 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:16.827339 | orchestrator | 2026-04-05 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:19.869598 | orchestrator | 2026-04-05 00:50:19 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:20.003227 | orchestrator | 2026-04-05 00:50:19 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:20.003298 | orchestrator | 2026-04-05 
00:50:19 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:20.003313 | orchestrator | 2026-04-05 00:50:19 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:20.003323 | orchestrator | 2026-04-05 00:50:19 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:20.003335 | orchestrator | 2026-04-05 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:22.915949 | orchestrator | 2026-04-05 00:50:22 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:22.916945 | orchestrator | 2026-04-05 00:50:22 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:22.918433 | orchestrator | 2026-04-05 00:50:22 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:22.920139 | orchestrator | 2026-04-05 00:50:22 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:22.923896 | orchestrator | 2026-04-05 00:50:22 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:22.923961 | orchestrator | 2026-04-05 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:25.958235 | orchestrator | 2026-04-05 00:50:25 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:25.959548 | orchestrator | 2026-04-05 00:50:25 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:25.960166 | orchestrator | 2026-04-05 00:50:25 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:25.961549 | orchestrator | 2026-04-05 00:50:25 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:25.962227 | orchestrator | 2026-04-05 00:50:25 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:25.962267 | orchestrator | 2026-04-05 
00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:29.017015 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:29.017484 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:29.019618 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:29.019682 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:29.019915 | orchestrator | 2026-04-05 00:50:29 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:29.019935 | orchestrator | 2026-04-05 00:50:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:32.074917 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:32.075697 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:32.076209 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:32.077580 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:32.079884 | orchestrator | 2026-04-05 00:50:32 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:32.080759 | orchestrator | 2026-04-05 00:50:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:35.125251 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:35.127658 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:35.128166 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 
7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:35.131211 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:35.131977 | orchestrator | 2026-04-05 00:50:35 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:35.132213 | orchestrator | 2026-04-05 00:50:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:38.277677 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:38.280971 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:38.282749 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:38.284444 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:38.286701 | orchestrator | 2026-04-05 00:50:38 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:38.286932 | orchestrator | 2026-04-05 00:50:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:41.323918 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:41.325409 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:41.326220 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:41.329248 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:41.330258 | orchestrator | 2026-04-05 00:50:41 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:41.330293 | orchestrator | 2026-04-05 00:50:41 | INFO  | Wait 1 
second(s) until the next check 2026-04-05 00:50:44.386722 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:44.388012 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state STARTED 2026-04-05 00:50:44.389716 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:44.391124 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:44.392371 | orchestrator | 2026-04-05 00:50:44 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:44.392410 | orchestrator | 2026-04-05 00:50:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:47.421434 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:50:47.422407 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:47.425125 | orchestrator | 2026-04-05 00:50:47.425160 | orchestrator | 2026-04-05 00:50:47.425167 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:50:47.425172 | orchestrator | 2026-04-05 00:50:47.425177 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:50:47.425182 | orchestrator | Sunday 05 April 2026 00:49:31 +0000 (0:00:00.788) 0:00:00.788 ********** 2026-04-05 00:50:47.425187 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:47.425192 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:47.425197 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:47.425201 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:47.425206 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:47.425210 | orchestrator | ok: [testbed-node-2] 2026-04-05 
00:50:47.425215 | orchestrator | 2026-04-05 00:50:47.425219 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:50:47.425224 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:01.257) 0:00:02.046 ********** 2026-04-05 00:50:47.425229 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:47.425234 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:47.425238 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:47.425243 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:47.425259 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:47.425264 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 00:50:47.425269 | orchestrator | 2026-04-05 00:50:47.425273 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-05 00:50:47.425278 | orchestrator | 2026-04-05 00:50:47.425282 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-05 00:50:47.425287 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:01.339) 0:00:03.385 ********** 2026-04-05 00:50:47.425292 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:50:47.425298 | orchestrator | 2026-04-05 00:50:47.425302 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-05 00:50:47.425307 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:01.562) 0:00:04.948 ********** 2026-04-05 00:50:47.425311 | orchestrator | changed: 
[testbed-node-3] => (item=openvswitch) 2026-04-05 00:50:47.425316 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-05 00:50:47.425321 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-05 00:50:47.425325 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-05 00:50:47.425330 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-05 00:50:47.425334 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-05 00:50:47.425339 | orchestrator | 2026-04-05 00:50:47.425343 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-05 00:50:47.425348 | orchestrator | Sunday 05 April 2026 00:49:37 +0000 (0:00:02.101) 0:00:07.049 ********** 2026-04-05 00:50:47.425352 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-05 00:50:47.425428 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-05 00:50:47.425433 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-05 00:50:47.425438 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-05 00:50:47.425449 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-05 00:50:47.425454 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-05 00:50:47.425458 | orchestrator | 2026-04-05 00:50:47.425462 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-05 00:50:47.425466 | orchestrator | Sunday 05 April 2026 00:49:39 +0000 (0:00:02.301) 0:00:09.351 ********** 2026-04-05 00:50:47.425471 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-05 00:50:47.425475 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:47.425480 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-05 00:50:47.425484 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:47.425488 | orchestrator | 
skipping: [testbed-node-5] => (item=openvswitch)  2026-04-05 00:50:47.425503 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:47.425507 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-05 00:50:47.425512 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:47.425516 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-05 00:50:47.425520 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:47.425524 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-05 00:50:47.425529 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:47.425533 | orchestrator | 2026-04-05 00:50:47.425537 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-05 00:50:47.425541 | orchestrator | Sunday 05 April 2026 00:49:42 +0000 (0:00:02.094) 0:00:11.446 ********** 2026-04-05 00:50:47.425546 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:47.425550 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:47.425554 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:47.425558 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:47.425569 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:47.425573 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:47.425577 | orchestrator | 2026-04-05 00:50:47.425582 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-05 00:50:47.425586 | orchestrator | Sunday 05 April 2026 00:49:43 +0000 (0:00:00.956) 0:00:12.403 ********** 2026-04-05 00:50:47.425598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425675 | orchestrator | 2026-04-05 00:50:47.425679 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-05 00:50:47.425684 | orchestrator | Sunday 05 April 2026 00:49:45 +0000 (0:00:02.356) 0:00:14.759 ********** 2026-04-05 00:50:47.425688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425760 | orchestrator | 2026-04-05 00:50:47.425764 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-05 00:50:47.425769 | orchestrator | Sunday 05 April 2026 00:49:50 +0000 (0:00:04.850) 0:00:19.610 ********** 2026-04-05 00:50:47.425773 | 
orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:47.425777 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:47.425782 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:47.425786 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:50:47.425790 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:50:47.425794 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:50:47.425799 | orchestrator | 2026-04-05 00:50:47.425803 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-05 00:50:47.425807 | orchestrator | Sunday 05 April 2026 00:49:51 +0000 (0:00:01.579) 0:00:21.189 ********** 2026-04-05 00:50:47.425812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425854 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 00:50:47.425885 | orchestrator | 2026-04-05 00:50:47.425889 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:47.425893 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:03.301) 0:00:24.491 ********** 2026-04-05 00:50:47.425898 | orchestrator | 2026-04-05 00:50:47.425902 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-04-05 00:50:47.425906 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.193) 0:00:24.684 ********** 2026-04-05 00:50:47.425911 | orchestrator | 2026-04-05 00:50:47.425915 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:47.425919 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.130) 0:00:24.815 ********** 2026-04-05 00:50:47.425923 | orchestrator | 2026-04-05 00:50:47.425930 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:47.425934 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.239) 0:00:25.055 ********** 2026-04-05 00:50:47.425939 | orchestrator | 2026-04-05 00:50:47.425943 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:47.425947 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.714) 0:00:25.769 ********** 2026-04-05 00:50:47.425951 | orchestrator | 2026-04-05 00:50:47.425956 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-05 00:50:47.425960 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.330) 0:00:26.100 ********** 2026-04-05 00:50:47.425964 | orchestrator | 2026-04-05 00:50:47.425968 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-05 00:50:47.425973 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.133) 0:00:26.234 ********** 2026-04-05 00:50:47.425977 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:47.425981 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:47.425985 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:47.425990 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:47.425996 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:47.426000 | orchestrator | changed: 
[testbed-node-2] 2026-04-05 00:50:47.426005 | orchestrator | 2026-04-05 00:50:47.426009 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-05 00:50:47.426035 | orchestrator | Sunday 05 April 2026 00:50:07 +0000 (0:00:10.480) 0:00:36.714 ********** 2026-04-05 00:50:47.426041 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:50:47.426046 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:50:47.426050 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:50:47.426054 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:50:47.426059 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:50:47.426063 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:50:47.426067 | orchestrator | 2026-04-05 00:50:47.426073 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-05 00:50:47.426078 | orchestrator | Sunday 05 April 2026 00:50:08 +0000 (0:00:01.674) 0:00:38.389 ********** 2026-04-05 00:50:47.426083 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:47.426089 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:47.426094 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:47.426098 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:47.426104 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:47.426108 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:47.426113 | orchestrator | 2026-04-05 00:50:47.426118 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-05 00:50:47.426123 | orchestrator | Sunday 05 April 2026 00:50:17 +0000 (0:00:08.737) 0:00:47.127 ********** 2026-04-05 00:50:47.426128 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-05 00:50:47.426134 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 
2026-04-05 00:50:47.426139 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-05 00:50:47.426144 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-05 00:50:47.426149 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-05 00:50:47.426157 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-05 00:50:47.426164 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task 9e274abf-efbf-4dc5-9c1e-f4e636526ec7 is in state SUCCESS 2026-04-05 00:50:47.426169 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-05 00:50:47.426174 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-05 00:50:47.426182 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-05 00:50:47.426187 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-05 00:50:47.426192 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-05 00:50:47.426197 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-05 00:50:47.426201 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:50:47.426206 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:50:47.426211 | 
orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:50:47.426216 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:50:47.426221 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:50:47.426226 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-05 00:50:47.426230 | orchestrator | 2026-04-05 00:50:47.426235 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-05 00:50:47.426240 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:10.020) 0:00:57.147 ********** 2026-04-05 00:50:47.426246 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-05 00:50:47.426251 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:47.426256 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-05 00:50:47.426261 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:47.426266 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-05 00:50:47.426271 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:47.426276 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-05 00:50:47.426280 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-05 00:50:47.426285 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-05 00:50:47.426290 | orchestrator | 2026-04-05 00:50:47.426295 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-05 00:50:47.426301 | orchestrator | Sunday 05 April 2026 00:50:30 +0000 (0:00:02.741) 0:00:59.889 ********** 2026-04-05 00:50:47.426306 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  
2026-04-05 00:50:47.426311 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:50:47.426316 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-05 00:50:47.426320 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:50:47.426325 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-05 00:50:47.426330 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:50:47.426335 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-05 00:50:47.426341 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-05 00:50:47.426346 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-05 00:50:47.426350 | orchestrator | 2026-04-05 00:50:47.426355 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-05 00:50:47.426360 | orchestrator | Sunday 05 April 2026 00:50:35 +0000 (0:00:04.667) 0:01:04.556 ********** 2026-04-05 00:50:47.426365 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:50:47.426370 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:50:47.426375 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:50:47.426380 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:50:47.426390 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:50:47.426395 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:50:47.426400 | orchestrator | 2026-04-05 00:50:47.426404 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:50:47.426410 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:50:47.426415 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:50:47.426420 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 
2026-04-05 00:50:47.426425 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 00:50:47.426433 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 00:50:47.426438 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 00:50:47.426443 | orchestrator | 2026-04-05 00:50:47.426448 | orchestrator | 2026-04-05 00:50:47.426453 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:50:47.426459 | orchestrator | Sunday 05 April 2026 00:50:44 +0000 (0:00:08.937) 0:01:13.493 ********** 2026-04-05 00:50:47.426464 | orchestrator | =============================================================================== 2026-04-05 00:50:47.426468 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.68s 2026-04-05 00:50:47.426473 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.48s 2026-04-05 00:50:47.426477 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.02s 2026-04-05 00:50:47.426481 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.85s 2026-04-05 00:50:47.426485 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.67s 2026-04-05 00:50:47.426490 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.30s 2026-04-05 00:50:47.426503 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.74s 2026-04-05 00:50:47.426507 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.36s 2026-04-05 00:50:47.426512 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.30s 2026-04-05 00:50:47.426516 | 
orchestrator | module-load : Load modules ---------------------------------------------- 2.10s 2026-04-05 00:50:47.426520 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.09s 2026-04-05 00:50:47.426524 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.74s 2026-04-05 00:50:47.426529 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.67s 2026-04-05 00:50:47.426533 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.58s 2026-04-05 00:50:47.426537 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.56s 2026-04-05 00:50:47.426541 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.34s 2026-04-05 00:50:47.426546 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.26s 2026-04-05 00:50:47.426550 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.96s 2026-04-05 00:50:47.426554 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:47.426559 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:47.427260 | orchestrator | 2026-04-05 00:50:47 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:47.427320 | orchestrator | 2026-04-05 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:50.465302 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:50:50.465384 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:50.470663 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is 
in state STARTED 2026-04-05 00:50:50.470747 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:50.470762 | orchestrator | 2026-04-05 00:50:50 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:50.470774 | orchestrator | 2026-04-05 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:53.517475 | orchestrator | 2026-04-05 00:50:53 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:50:53.519615 | orchestrator | 2026-04-05 00:50:53 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:53.520639 | orchestrator | 2026-04-05 00:50:53 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:53.522539 | orchestrator | 2026-04-05 00:50:53 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:53.528067 | orchestrator | 2026-04-05 00:50:53 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:53.528106 | orchestrator | 2026-04-05 00:50:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:50:56.569022 | orchestrator | 2026-04-05 00:50:56 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:50:56.570165 | orchestrator | 2026-04-05 00:50:56 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:50:56.571739 | orchestrator | 2026-04-05 00:50:56 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state STARTED 2026-04-05 00:50:56.572871 | orchestrator | 2026-04-05 00:50:56 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:50:56.575909 | orchestrator | 2026-04-05 00:50:56 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:50:56.576014 | orchestrator | 2026-04-05 00:50:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 
00:51:40.594340 | orchestrator | 2026-04-05 00:51:40 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:51:40.596616 | orchestrator | 2026-04-05 00:51:40 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:51:40.596973 | orchestrator | 2026-04-05 00:51:40 | INFO  | Task 7fddfc23-0ecf-4242-844b-31339d8bcb7f is in state SUCCESS 2026-04-05 00:51:40.597049 | orchestrator | 2026-04-05 00:51:40.598989 | orchestrator | 2026-04-05 00:51:40.599030 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-05 00:51:40.599045 | orchestrator | 2026-04-05 00:51:40.599058 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-05 00:51:40.599071 | orchestrator | Sunday 05 April 2026 00:46:47 +0000 (0:00:00.291) 0:00:00.291 ********** 2026-04-05 00:51:40.599083 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:51:40.599128 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:51:40.599142 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:51:40.599155 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:51:40.599168 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:51:40.599180 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:51:40.599192 | orchestrator | 2026-04-05 00:51:40.599238 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-05 00:51:40.599251 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.568) 0:00:00.860 ********** 2026-04-05 00:51:40.599299 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.599313 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:40.599325 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:40.599336 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:40.599349 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:40.599361 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:40.599373 | orchestrator | 2026-04-05 00:51:40.599402 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-05 00:51:40.599414 | orchestrator | Sunday 05 April 2026 00:46:48 +0000 (0:00:00.676) 0:00:01.537 ********** 2026-04-05 00:51:40.599427 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.599516 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:40.599531 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:40.599542 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:40.599554 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:40.599566 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:40.599578 | orchestrator | 2026-04-05 00:51:40.599590 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-05 00:51:40.599602 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.559) 0:00:02.096 ********** 2026-04-05 00:51:40.599614 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:40.599641 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:40.599652 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:40.599663 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:40.599675 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:40.599687 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:40.599698 | orchestrator | 2026-04-05 00:51:40.599710 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-05 00:51:40.599721 | orchestrator | Sunday 05 April 2026 00:46:51 +0000 
(0:00:02.270) 0:00:04.367 ********** 2026-04-05 00:51:40.599732 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:40.599743 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:40.599754 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:40.599764 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:40.599775 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:40.599786 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:40.599797 | orchestrator | 2026-04-05 00:51:40.599810 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-05 00:51:40.599822 | orchestrator | Sunday 05 April 2026 00:46:52 +0000 (0:00:01.254) 0:00:05.622 ********** 2026-04-05 00:51:40.599834 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:40.599846 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:40.599858 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:40.599870 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:40.599882 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:40.599894 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:40.599906 | orchestrator | 2026-04-05 00:51:40.599918 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-05 00:51:40.599930 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:01.665) 0:00:07.287 ********** 2026-04-05 00:51:40.599941 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.599953 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:40.599965 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:40.599977 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:40.599989 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:40.600000 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:40.600056 | orchestrator | 2026-04-05 00:51:40.600070 | orchestrator | TASK [k3s_prereq : Load 
br_netfilter] ****************************************** 2026-04-05 00:51:40.600082 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:00.976) 0:00:08.263 ********** 2026-04-05 00:51:40.600093 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.600104 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:40.600114 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:40.600125 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:40.600136 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:40.600146 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:40.600157 | orchestrator | 2026-04-05 00:51:40.600168 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-05 00:51:40.600178 | orchestrator | Sunday 05 April 2026 00:46:56 +0000 (0:00:01.311) 0:00:09.575 ********** 2026-04-05 00:51:40.600189 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:51:40.600201 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:51:40.600212 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.600235 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:51:40.600245 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:51:40.600254 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:40.600264 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:51:40.600273 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:51:40.600282 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:40.600292 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:51:40.600316 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:51:40.600326 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:40.600336 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:51:40.600345 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:51:40.600355 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:40.600365 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 00:51:40.600374 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 00:51:40.600384 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:40.600394 | orchestrator | 2026-04-05 00:51:40.600404 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-05 00:51:40.600413 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:01.057) 0:00:10.633 ********** 2026-04-05 00:51:40.600422 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.600431 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:40.600441 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:40.600491 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:40.600506 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:40.600516 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:40.600526 | orchestrator | 2026-04-05 00:51:40.600553 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-05 00:51:40.600565 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:01.993) 0:00:12.626 ********** 2026-04-05 00:51:40.600575 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:51:40.600587 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:51:40.600598 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:51:40.600609 | orchestrator | ok: 
[testbed-node-0] 2026-04-05 00:51:40.600620 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:51:40.600631 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:51:40.600642 | orchestrator | 2026-04-05 00:51:40.600653 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-05 00:51:40.600664 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:01.346) 0:00:13.973 ********** 2026-04-05 00:51:40.600674 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:51:40.600686 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:51:40.600697 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:51:40.600708 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:51:40.600717 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:51:40.600723 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:51:40.600729 | orchestrator | 2026-04-05 00:51:40.600735 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-05 00:51:40.600741 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:06.401) 0:00:20.375 ********** 2026-04-05 00:51:40.600747 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.600753 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:51:40.600759 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:51:40.600765 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:51:40.600771 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:51:40.600777 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:51:40.600783 | orchestrator | 2026-04-05 00:51:40.600799 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-05 00:51:40.600805 | orchestrator | Sunday 05 April 2026 00:47:09 +0000 (0:00:01.984) 0:00:22.359 ********** 2026-04-05 00:51:40.600811 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:51:40.600818 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 00:51:40.600824 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.600830 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.600836 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.600842 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.600848 | orchestrator |
2026-04-05 00:51:40.600854 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-05 00:51:40.600862 | orchestrator | Sunday 05 April 2026 00:47:13 +0000 (0:00:03.843) 0:00:26.202 **********
2026-04-05 00:51:40.600868 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.600874 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.600880 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.600886 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.600892 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.600898 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.600904 | orchestrator |
2026-04-05 00:51:40.600910 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-05 00:51:40.600916 | orchestrator | Sunday 05 April 2026 00:47:14 +0000 (0:00:01.474) 0:00:27.677 **********
2026-04-05 00:51:40.600922 | orchestrator | skipping: [testbed-node-3] => (item=rancher) 
2026-04-05 00:51:40.600928 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s) 
2026-04-05 00:51:40.600934 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.600940 | orchestrator | skipping: [testbed-node-4] => (item=rancher) 
2026-04-05 00:51:40.600946 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s) 
2026-04-05 00:51:40.600952 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.600958 | orchestrator | skipping: [testbed-node-5] => (item=rancher) 
2026-04-05 00:51:40.600964 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s) 
2026-04-05 00:51:40.600970 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.600976 | orchestrator | skipping: [testbed-node-0] => (item=rancher) 
2026-04-05 00:51:40.600982 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s) 
2026-04-05 00:51:40.600988 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.600995 | orchestrator | skipping: [testbed-node-1] => (item=rancher) 
2026-04-05 00:51:40.601005 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s) 
2026-04-05 00:51:40.601015 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601025 | orchestrator | skipping: [testbed-node-2] => (item=rancher) 
2026-04-05 00:51:40.601035 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s) 
2026-04-05 00:51:40.601046 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601057 | orchestrator |
2026-04-05 00:51:40.601067 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-05 00:51:40.601089 | orchestrator | Sunday 05 April 2026 00:47:15 +0000 (0:00:00.812) 0:00:28.490 **********
2026-04-05 00:51:40.601099 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.601110 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.601117 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.601123 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.601129 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601135 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601141 | orchestrator |
2026-04-05 00:51:40.601147 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-05 00:51:40.601153 | orchestrator | Sunday 05 April 2026 00:47:17 +0000 (0:00:01.254) 0:00:29.744 **********
2026-04-05 00:51:40.601159 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.601165 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.601177 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.601183 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.601189 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601195 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601201 | orchestrator |
2026-04-05 00:51:40.601207 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-05 00:51:40.601213 | orchestrator |
2026-04-05 00:51:40.601219 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-05 00:51:40.601230 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:01.489) 0:00:31.233 **********
2026-04-05 00:51:40.601237 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.601243 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.601249 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.601257 | orchestrator |
2026-04-05 00:51:40.601267 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-05 00:51:40.601277 | orchestrator | Sunday 05 April 2026 00:47:20 +0000 (0:00:02.041) 0:00:33.274 **********
2026-04-05 00:51:40.601287 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.601298 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.601308 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.601318 | orchestrator |
2026-04-05 00:51:40.601328 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-05 00:51:40.601339 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:01.311) 0:00:34.586 **********
2026-04-05 00:51:40.601350 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.601360 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.601370 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.601380 | orchestrator |
2026-04-05 00:51:40.601391 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-05 00:51:40.601402 | orchestrator | Sunday 05 April 2026 00:47:23 +0000 (0:00:01.173) 0:00:35.760 **********
2026-04-05 00:51:40.601412 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.601421 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.601431 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.601441 | orchestrator |
2026-04-05 00:51:40.601452 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-05 00:51:40.601482 | orchestrator | Sunday 05 April 2026 00:47:25 +0000 (0:00:02.007) 0:00:37.768 **********
2026-04-05 00:51:40.601493 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.601503 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601513 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601524 | orchestrator |
2026-04-05 00:51:40.601536 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-05 00:51:40.601545 | orchestrator | Sunday 05 April 2026 00:47:25 +0000 (0:00:00.372) 0:00:38.141 **********
2026-04-05 00:51:40.601556 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.601566 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.601577 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.601586 | orchestrator |
2026-04-05 00:51:40.601597 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-05 00:51:40.601606 | orchestrator | Sunday 05 April 2026 00:47:26 +0000 (0:00:01.312) 0:00:39.453 **********
2026-04-05 00:51:40.601613 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.601619 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.601625 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.601631 | orchestrator |
2026-04-05 00:51:40.601637 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-05 00:51:40.601643 | orchestrator | Sunday 05 April 2026 00:47:28 +0000 (0:00:01.852) 0:00:41.306 **********
2026-04-05 00:51:40.601649 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:51:40.601655 | orchestrator |
2026-04-05 00:51:40.601661 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-05 00:51:40.601667 | orchestrator | Sunday 05 April 2026 00:47:29 +0000 (0:00:01.304) 0:00:42.611 **********
2026-04-05 00:51:40.601681 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.601687 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.601693 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.601699 | orchestrator |
2026-04-05 00:51:40.601705 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-05 00:51:40.601711 | orchestrator | Sunday 05 April 2026 00:47:34 +0000 (0:00:04.252) 0:00:46.864 **********
2026-04-05 00:51:40.601717 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601723 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.601729 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601735 | orchestrator |
2026-04-05 00:51:40.601741 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-05 00:51:40.601747 | orchestrator | Sunday 05 April 2026 00:47:35 +0000 (0:00:00.881) 0:00:47.745 **********
2026-04-05 00:51:40.601753 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.601759 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601765 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601771 | orchestrator |
2026-04-05 00:51:40.601777 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-05 00:51:40.601783 | orchestrator | Sunday 05 April 2026 00:47:36 +0000 (0:00:01.209) 0:00:48.955 **********
2026-04-05 00:51:40.601789 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601795 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601801 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.601807 | orchestrator |
2026-04-05 00:51:40.601813 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-05 00:51:40.601826 | orchestrator | Sunday 05 April 2026 00:47:38 +0000 (0:00:02.087) 0:00:51.043 **********
2026-04-05 00:51:40.601833 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.601839 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601845 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601851 | orchestrator |
2026-04-05 00:51:40.601857 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-05 00:51:40.601863 | orchestrator | Sunday 05 April 2026 00:47:38 +0000 (0:00:00.448) 0:00:51.491 **********
2026-04-05 00:51:40.601869 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.601875 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.601881 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.601886 | orchestrator |
2026-04-05 00:51:40.601893 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-05 00:51:40.601902 | orchestrator | Sunday 05 April 2026 00:47:39 +0000 (0:00:00.415) 0:00:51.906 **********
2026-04-05 00:51:40.601912 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.601921 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.601931 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.601942 | orchestrator |
2026-04-05 00:51:40.601953 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-05 00:51:40.601968 | orchestrator | Sunday 05 April 2026 00:47:41 +0000 (0:00:02.184) 0:00:54.091 **********
2026-04-05 00:51:40.601979 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.601990 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.601999 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602010 | orchestrator |
2026-04-05 00:51:40.602073 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-05 00:51:40.602086 | orchestrator | Sunday 05 April 2026 00:47:44 +0000 (0:00:02.809) 0:00:56.900 **********
2026-04-05 00:51:40.602098 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602109 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602121 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602132 | orchestrator |
2026-04-05 00:51:40.602143 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-05 00:51:40.602155 | orchestrator | Sunday 05 April 2026 00:47:44 +0000 (0:00:00.492) 0:00:57.393 **********
2026-04-05 00:51:40.602166 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:51:40.602187 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:51:40.602199 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 00:51:40.602210 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:51:40.602221 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:51:40.602232 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 00:51:40.602242 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:51:40.602254 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:51:40.602265 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 00:51:40.602276 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:51:40.602287 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:51:40.602299 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-05 00:51:40.602311 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602322 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602334 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602340 | orchestrator |
2026-04-05 00:51:40.602347 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-05 00:51:40.602353 | orchestrator | Sunday 05 April 2026 00:48:27 +0000 (0:00:43.263) 0:01:40.657 **********
2026-04-05 00:51:40.602359 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.602365 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.602371 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.602377 | orchestrator |
2026-04-05 00:51:40.602383 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-05 00:51:40.602389 | orchestrator | Sunday 05 April 2026 00:48:28 +0000 (0:00:00.503) 0:01:41.161 **********
2026-04-05 00:51:40.602395 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602401 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602407 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602413 | orchestrator |
2026-04-05 00:51:40.602419 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-05 00:51:40.602425 | orchestrator | Sunday 05 April 2026 00:48:29 +0000 (0:00:00.980) 0:01:42.141 **********
2026-04-05 00:51:40.602431 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602437 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602443 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602449 | orchestrator |
2026-04-05 00:51:40.602478 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-05 00:51:40.602485 | orchestrator | Sunday 05 April 2026 00:48:30 +0000 (0:00:01.170) 0:01:43.312 **********
2026-04-05 00:51:40.602491 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602497 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602503 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602509 | orchestrator |
2026-04-05 00:51:40.602515 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-05 00:51:40.602527 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:23.529) 0:02:06.841 **********
2026-04-05 00:51:40.602534 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602540 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602546 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602552 | orchestrator |
2026-04-05 00:51:40.602558 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-05 00:51:40.602564 | orchestrator | Sunday 05 April 2026 00:48:54 +0000 (0:00:00.745) 0:02:07.587 **********
2026-04-05 00:51:40.602570 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602576 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602582 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602588 | orchestrator |
2026-04-05 00:51:40.602594 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-05 00:51:40.602605 | orchestrator | Sunday 05 April 2026 00:48:56 +0000 (0:00:01.188) 0:02:08.775 **********
2026-04-05 00:51:40.602611 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602617 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602623 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602629 | orchestrator |
2026-04-05 00:51:40.602635 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-05 00:51:40.602641 | orchestrator | Sunday 05 April 2026 00:48:56 +0000 (0:00:00.918) 0:02:09.693 **********
2026-04-05 00:51:40.602647 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602653 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602659 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602665 | orchestrator |
2026-04-05 00:51:40.602671 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-05 00:51:40.602677 | orchestrator | Sunday 05 April 2026 00:48:57 +0000 (0:00:00.885) 0:02:10.579 **********
2026-04-05 00:51:40.602683 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602689 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602695 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602701 | orchestrator |
2026-04-05 00:51:40.602707 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-05 00:51:40.602714 | orchestrator | Sunday 05 April 2026 00:48:58 +0000 (0:00:00.464) 0:02:11.044 **********
2026-04-05 00:51:40.602720 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602726 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602732 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602737 | orchestrator |
2026-04-05 00:51:40.602743 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-05 00:51:40.602750 | orchestrator | Sunday 05 April 2026 00:48:59 +0000 (0:00:00.921) 0:02:11.966 **********
2026-04-05 00:51:40.602756 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602762 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602768 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602774 | orchestrator |
2026-04-05 00:51:40.602780 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-05 00:51:40.602786 | orchestrator | Sunday 05 April 2026 00:48:59 +0000 (0:00:00.762) 0:02:12.729 **********
2026-04-05 00:51:40.602792 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602798 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602804 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602810 | orchestrator |
2026-04-05 00:51:40.602816 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-05 00:51:40.602822 | orchestrator | Sunday 05 April 2026 00:49:01 +0000 (0:00:01.061) 0:02:13.790 **********
2026-04-05 00:51:40.602828 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:51:40.602834 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:51:40.602840 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:51:40.602846 | orchestrator |
2026-04-05 00:51:40.602852 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-05 00:51:40.602858 | orchestrator | Sunday 05 April 2026 00:49:02 +0000 (0:00:01.031) 0:02:14.822 **********
2026-04-05 00:51:40.602869 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.602875 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.602881 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.602887 | orchestrator |
2026-04-05 00:51:40.602893 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-05 00:51:40.602899 | orchestrator | Sunday 05 April 2026 00:49:02 +0000 (0:00:00.525) 0:02:15.347 **********
2026-04-05 00:51:40.602905 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.602911 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.602917 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.602923 | orchestrator |
2026-04-05 00:51:40.602929 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-05 00:51:40.602935 | orchestrator | Sunday 05 April 2026 00:49:02 +0000 (0:00:00.289) 0:02:15.636 **********
2026-04-05 00:51:40.602941 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602947 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602953 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602959 | orchestrator |
2026-04-05 00:51:40.602965 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-05 00:51:40.602972 | orchestrator | Sunday 05 April 2026 00:49:03 +0000 (0:00:00.616) 0:02:16.253 **********
2026-04-05 00:51:40.602977 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.602984 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.602990 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.602996 | orchestrator |
2026-04-05 00:51:40.603002 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-05 00:51:40.603008 | orchestrator | Sunday 05 April 2026 00:49:04 +0000 (0:00:00.645) 0:02:16.898 **********
2026-04-05 00:51:40.603014 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:51:40.603025 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:51:40.603031 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 00:51:40.603037 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:51:40.603043 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:51:40.603049 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 00:51:40.603055 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:51:40.603062 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:51:40.603068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 00:51:40.603074 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-05 00:51:40.603084 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:51:40.603090 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:51:40.603096 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:51:40.603102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-05 00:51:40.603108 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:51:40.603114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:51:40.603120 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 00:51:40.603126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:51:40.603142 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 00:51:40.603148 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 00:51:40.603154 | orchestrator |
2026-04-05 00:51:40.603160 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-05 00:51:40.603166 | orchestrator |
2026-04-05 00:51:40.603172 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-05 00:51:40.603178 | orchestrator | Sunday 05 April 2026 00:49:07 +0000 (0:00:03.686) 0:02:20.584 **********
2026-04-05 00:51:40.603184 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:51:40.603190 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:51:40.603196 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:51:40.603202 | orchestrator |
2026-04-05 00:51:40.603208 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-05 00:51:40.603214 | orchestrator | Sunday 05 April 2026 00:49:08 +0000 (0:00:00.308) 0:02:20.893 **********
2026-04-05 00:51:40.603220 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:51:40.603226 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:51:40.603232 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:51:40.603239 | orchestrator |
2026-04-05 00:51:40.603245 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-05 00:51:40.603251 | orchestrator | Sunday 05 April 2026 00:49:08 +0000 (0:00:00.603) 0:02:21.497 **********
2026-04-05 00:51:40.603257 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:51:40.603262 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:51:40.603268 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:51:40.603275 | orchestrator |
2026-04-05 00:51:40.603281 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-05 00:51:40.603287 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:00.431) 0:02:21.929 **********
2026-04-05 00:51:40.603293 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:51:40.603299 | orchestrator |
2026-04-05 00:51:40.603305 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-05 00:51:40.603311 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:00.458) 0:02:22.387 **********
2026-04-05 00:51:40.603317 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.603323 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.603329 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.603335 | orchestrator |
2026-04-05 00:51:40.603341 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-05 00:51:40.603347 | orchestrator | Sunday 05 April 2026 00:49:09 +0000 (0:00:00.268) 0:02:22.656 **********
2026-04-05 00:51:40.603353 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.603359 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.603365 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.603371 | orchestrator |
2026-04-05 00:51:40.603377 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-05 00:51:40.603383 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:00.395) 0:02:23.051 **********
2026-04-05 00:51:40.603389 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.603395 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.603401 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.603408 | orchestrator |
2026-04-05 00:51:40.603413 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-05 00:51:40.603420 | orchestrator | Sunday 05 April 2026 00:49:10 +0000 (0:00:00.273) 0:02:23.325 **********
2026-04-05 00:51:40.603426 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:51:40.603432 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:51:40.603437 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:51:40.603443 | orchestrator |
2026-04-05 00:51:40.603454 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-05 00:51:40.603488 | orchestrator | Sunday 05 April 2026 00:49:11 +0000 (0:00:00.609) 0:02:23.934 **********
2026-04-05 00:51:40.603495 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:51:40.603501 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:51:40.603507 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:51:40.603513 | orchestrator |
2026-04-05 00:51:40.603520 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-05 00:51:40.603526 | orchestrator | Sunday 05 April 2026 00:49:12 +0000 (0:00:01.044) 0:02:24.979 **********
2026-04-05 00:51:40.603532 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:51:40.603538 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:51:40.603544 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:51:40.603550 | orchestrator |
2026-04-05 00:51:40.603557 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-05 00:51:40.603563 | orchestrator | Sunday 05 April 2026 00:49:13 +0000 (0:00:01.635) 0:02:26.615 **********
2026-04-05 00:51:40.603569 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:51:40.603575 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:51:40.603581 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:51:40.603587 | orchestrator |
2026-04-05 00:51:40.603597 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-05 00:51:40.603603 | orchestrator |
2026-04-05 00:51:40.603610 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-05 00:51:40.603616 | orchestrator | Sunday 05 April 2026 00:49:24 +0000 (0:00:10.267) 0:02:36.882 **********
2026-04-05 00:51:40.603622 | orchestrator | ok: [testbed-manager]
2026-04-05 00:51:40.603628 | orchestrator |
2026-04-05 00:51:40.603634 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-05 00:51:40.603640 | orchestrator | Sunday 05 April 2026 00:49:25 +0000 (0:00:00.490) 0:02:37.782 **********
2026-04-05 00:51:40.603646 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.603652 | orchestrator |
2026-04-05 00:51:40.603659 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:51:40.603665 | orchestrator | Sunday 05 April 2026 00:49:25 +0000 (0:00:00.490) 0:02:38.272 **********
2026-04-05 00:51:40.603671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:51:40.603677 | orchestrator |
2026-04-05 00:51:40.603683 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:51:40.603689 | orchestrator | Sunday 05 April 2026 00:49:26 +0000 (0:00:00.597) 0:02:38.870 **********
2026-04-05 00:51:40.603696 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.603702 | orchestrator |
2026-04-05 00:51:40.603708 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-05 00:51:40.603714 | orchestrator | Sunday 05 April 2026 00:49:27 +0000 (0:00:01.336) 0:02:40.206 **********
2026-04-05 00:51:40.603720 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.603726 | orchestrator |
2026-04-05 00:51:40.603732 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-05 00:51:40.603738 | orchestrator | Sunday 05 April 2026 00:49:28 +0000 (0:00:00.651) 0:02:40.858 **********
2026-04-05 00:51:40.603745 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:51:40.603751 | orchestrator |
2026-04-05 00:51:40.603757 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-05 00:51:40.603763 | orchestrator | Sunday 05 April 2026 00:49:29 +0000 (0:00:01.858) 0:02:42.717 **********
2026-04-05 00:51:40.603769 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:51:40.603775 | orchestrator |
2026-04-05 00:51:40.603781 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-05 00:51:40.603788 | orchestrator | Sunday 05 April 2026 00:49:31 +0000 (0:00:01.062) 0:02:43.780 **********
2026-04-05 00:51:40.603794 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.603800 | orchestrator |
2026-04-05 00:51:40.603806 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 00:51:40.603817 | orchestrator | Sunday 05 April 2026 00:49:31 +0000 (0:00:00.508) 0:02:44.289 **********
2026-04-05 00:51:40.603823 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.603829 | orchestrator |
2026-04-05 00:51:40.603836 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-05 00:51:40.603842 | orchestrator |
2026-04-05 00:51:40.603848 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-05 00:51:40.603854 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:00.549) 0:02:44.838 **********
2026-04-05 00:51:40.603860 | orchestrator | ok: [testbed-manager]
2026-04-05 00:51:40.603866 | orchestrator |
2026-04-05 00:51:40.603872 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-05 00:51:40.603878 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:00.159) 0:02:44.997 **********
2026-04-05 00:51:40.603884 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:51:40.603891 | orchestrator |
2026-04-05 00:51:40.603897 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-05 00:51:40.603903 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:00.287) 0:02:45.285 **********
2026-04-05 00:51:40.603909 | orchestrator | ok: [testbed-manager]
2026-04-05 00:51:40.603915 | orchestrator |
2026-04-05 00:51:40.603921 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-05 00:51:40.603927 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:01.391) 0:02:46.676 **********
2026-04-05 00:51:40.603933 | orchestrator | ok: [testbed-manager]
2026-04-05 00:51:40.603939 | orchestrator |
2026-04-05 00:51:40.603946 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-05 00:51:40.603952 | orchestrator | Sunday 05 April 2026 00:49:35 +0000 (0:00:01.751) 0:02:48.428 **********
2026-04-05 00:51:40.603958 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.603964 | orchestrator |
2026-04-05 00:51:40.603970 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-05 00:51:40.603976 | orchestrator | Sunday 05 April 2026 00:49:36 +0000 (0:00:00.882) 0:02:49.310 **********
2026-04-05 00:51:40.603982 | orchestrator | ok: [testbed-manager]
2026-04-05 00:51:40.603988 | orchestrator |
2026-04-05 00:51:40.603998 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-05 00:51:40.604005 | orchestrator | Sunday 05 April 2026 00:49:37 +0000 (0:00:00.500) 0:02:49.811 **********
2026-04-05 00:51:40.604011 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.604017 | orchestrator |
2026-04-05 00:51:40.604023 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-05 00:51:40.604029 | orchestrator | Sunday 05 April 2026 00:49:46 +0000 (0:00:09.606) 0:02:59.417 **********
2026-04-05 00:51:40.604035 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.604041 | orchestrator |
2026-04-05 00:51:40.604048 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-05 00:51:40.604054 | orchestrator | Sunday 05 April 2026 00:50:02 +0000 (0:00:15.547) 0:03:14.965 **********
2026-04-05 00:51:40.604060 | orchestrator | ok: [testbed-manager]
2026-04-05 00:51:40.604066 | orchestrator |
2026-04-05 00:51:40.604072 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-05 00:51:40.604078 | orchestrator |
2026-04-05 00:51:40.604084 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-05 00:51:40.604090 | orchestrator | Sunday 05 April 2026 00:50:02 +0000 (0:00:00.714) 0:03:15.680 **********
2026-04-05 00:51:40.604096 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.604106 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.604113 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.604119 | orchestrator |
2026-04-05 00:51:40.604125 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-05 00:51:40.604131 | orchestrator | Sunday 05 April 2026 00:50:03 +0000 (0:00:00.676) 0:03:16.357 **********
2026-04-05 00:51:40.604137 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604148 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.604155 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.604161 | orchestrator |
2026-04-05 00:51:40.604168 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-05 00:51:40.604174 | orchestrator | Sunday 05 April 2026 00:50:03 +0000 (0:00:00.333) 0:03:16.691 **********
2026-04-05 00:51:40.604180 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:51:40.604186 | orchestrator |
2026-04-05 00:51:40.604192 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-05 00:51:40.604198 | orchestrator | Sunday 05 April 2026 00:50:04 +0000 (0:00:00.663) 0:03:17.355 **********
2026-04-05 00:51:40.604204 | orchestrator | changed: [testbed-node-0 -> localhost]
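[editor's note] The included cilium.yml goes on to test for an existing Cilium install, parse the installed version, and decide whether an update is needed before installing. A minimal version-comparison sketch of that decision; the version strings below are hypothetical examples, not the versions used in this job:

```python
# Sketch: decide whether an installed Cilium needs an update by comparing
# dotted version strings numerically (both versions below are hypothetical).

def parse_version(v: str) -> tuple:
    """'v1.15.3' -> (1, 15, 3); a leading 'v' is tolerated."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def needs_update(installed: str, target: str) -> bool:
    """True when the target version is numerically newer than the installed one."""
    return parse_version(installed) < parse_version(target)

print(needs_update("v1.14.5", "v1.15.3"))  # → True
```

Comparing parsed integer tuples avoids the classic string-comparison trap where "1.9" sorts after "1.15".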
2026-04-05 00:51:40.604211 | orchestrator |
2026-04-05 00:51:40.604217 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-05 00:51:40.604223 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:00.962) 0:03:18.317 **********
2026-04-05 00:51:40.604229 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:51:40.604235 | orchestrator |
2026-04-05 00:51:40.604241 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-05 00:51:40.604247 | orchestrator | Sunday 05 April 2026 00:50:07 +0000 (0:00:02.132) 0:03:20.450 **********
2026-04-05 00:51:40.604253 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604259 | orchestrator |
2026-04-05 00:51:40.604266 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-05 00:51:40.604272 | orchestrator | Sunday 05 April 2026 00:50:08 +0000 (0:00:00.351) 0:03:20.801 **********
2026-04-05 00:51:40.604278 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:51:40.604284 | orchestrator |
2026-04-05 00:51:40.604290 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-05 00:51:40.604296 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:02.073) 0:03:22.875 **********
2026-04-05 00:51:40.604302 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604308 | orchestrator |
2026-04-05 00:51:40.604314 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-05 00:51:40.604320 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:00.130) 0:03:23.005 **********
2026-04-05 00:51:40.604327 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604333 | orchestrator |
2026-04-05 00:51:40.604339 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-05 00:51:40.604345 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:00.103) 0:03:23.109 **********
2026-04-05 00:51:40.604351 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604357 | orchestrator |
2026-04-05 00:51:40.604364 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-05 00:51:40.604370 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:00.112) 0:03:23.221 **********
2026-04-05 00:51:40.604376 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604382 | orchestrator |
2026-04-05 00:51:40.604388 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-05 00:51:40.604394 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:00.115) 0:03:23.336 **********
2026-04-05 00:51:40.604400 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:51:40.604406 | orchestrator |
2026-04-05 00:51:40.604412 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-05 00:51:40.604419 | orchestrator | Sunday 05 April 2026 00:50:17 +0000 (0:00:06.488) 0:03:29.825 **********
2026-04-05 00:51:40.604425 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-05 00:51:40.604431 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
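[editor's note] The "FAILED - RETRYING ... (30 retries left)" line above is Ansible's retries/until mechanism re-running the readiness check until the Cilium resources roll out. The same pattern sketched in Python; the check function is a hypothetical stand-in for a `kubectl rollout status` call:

```python
import time

def wait_until(check, retries: int = 30, delay: float = 0.0) -> bool:
    """Re-run `check` until it succeeds or the retries are exhausted,
    mirroring Ansible's retries/until loop."""
    for _attempt in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

# Hypothetical check that succeeds on the third call, like a deployment
# whose pods become Ready while we poll.
calls = {"n": 0}
def deployment_ready() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until(deployment_ready, retries=30))  # → True
```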
2026-04-05 00:51:40.604437 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-05 00:51:40.604443 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-05 00:51:40.604455 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-05 00:51:40.604480 | orchestrator |
2026-04-05 00:51:40.604486 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-05 00:51:40.604493 | orchestrator | Sunday 05 April 2026 00:51:00 +0000 (0:00:43.902) 0:04:13.728 **********
2026-04-05 00:51:40.604503 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 00:51:40.604509 | orchestrator |
2026-04-05 00:51:40.604515 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-05 00:51:40.604522 | orchestrator | Sunday 05 April 2026 00:51:02 +0000 (0:00:01.506) 0:04:15.234 **********
2026-04-05 00:51:40.604528 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:51:40.604534 | orchestrator |
2026-04-05 00:51:40.604540 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-05 00:51:40.604546 | orchestrator | Sunday 05 April 2026 00:51:04 +0000 (0:00:01.958) 0:04:17.193 **********
2026-04-05 00:51:40.604552 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:51:40.604558 | orchestrator |
2026-04-05 00:51:40.604564 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-05 00:51:40.604570 | orchestrator | Sunday 05 April 2026 00:51:05 +0000 (0:00:01.282) 0:04:18.476 **********
2026-04-05 00:51:40.604576 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604582 | orchestrator |
2026-04-05 00:51:40.604589 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-05 00:51:40.604595 | orchestrator | Sunday 05 April 2026 00:51:05 +0000 (0:00:00.183) 0:04:18.659 **********
2026-04-05 00:51:40.604605 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-05 00:51:40.604612 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-05 00:51:40.604618 | orchestrator |
2026-04-05 00:51:40.604624 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-05 00:51:40.604630 | orchestrator | Sunday 05 April 2026 00:51:08 +0000 (0:00:02.599) 0:04:21.259 **********
2026-04-05 00:51:40.604636 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.604642 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.604648 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.604654 | orchestrator |
2026-04-05 00:51:40.604660 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-05 00:51:40.604667 | orchestrator | Sunday 05 April 2026 00:51:08 +0000 (0:00:00.458) 0:04:21.717 **********
2026-04-05 00:51:40.604673 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.604679 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.604685 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.604691 | orchestrator |
2026-04-05 00:51:40.604697 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-05 00:51:40.604703 | orchestrator |
2026-04-05 00:51:40.604709 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-05 00:51:40.604716 | orchestrator | Sunday 05 April 2026 00:51:10 +0000 (0:00:01.295) 0:04:23.013 **********
2026-04-05 00:51:40.604722 | orchestrator | ok: [testbed-manager]
2026-04-05 00:51:40.604728 | orchestrator |
2026-04-05 00:51:40.604734 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-05 00:51:40.604741 | orchestrator | Sunday 05 April 2026 00:51:10 +0000 (0:00:00.229) 0:04:23.242 **********
2026-04-05 00:51:40.604747 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 00:51:40.604753 | orchestrator |
2026-04-05 00:51:40.604759 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-05 00:51:40.604766 | orchestrator | Sunday 05 April 2026 00:51:11 +0000 (0:00:00.540) 0:04:23.783 **********
2026-04-05 00:51:40.604772 | orchestrator | changed: [testbed-manager]
2026-04-05 00:51:40.604778 | orchestrator |
2026-04-05 00:51:40.604785 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-05 00:51:40.604795 | orchestrator |
2026-04-05 00:51:40.604802 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-05 00:51:40.604808 | orchestrator | Sunday 05 April 2026 00:51:17 +0000 (0:00:06.855) 0:04:30.639 **********
2026-04-05 00:51:40.604814 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:51:40.604820 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:51:40.604826 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:51:40.604832 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:51:40.604838 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:51:40.604844 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:51:40.604850 | orchestrator |
2026-04-05 00:51:40.604856 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-05 00:51:40.604862 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:00.748) 0:04:31.387 **********
2026-04-05 00:51:40.604868 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 00:51:40.604874 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 00:51:40.604881 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 00:51:40.604887 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 00:51:40.604893 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 00:51:40.604899 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 00:51:40.604905 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 00:51:40.604911 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 00:51:40.604917 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 00:51:40.604923 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 00:51:40.604930 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 00:51:40.604936 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 00:51:40.604947 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 00:51:40.604953 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 00:51:40.604959 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 00:51:40.604965 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 00:51:40.604971 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 00:51:40.604977 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 00:51:40.604983 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 00:51:40.604989 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 00:51:40.604995 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 00:51:40.605001 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 00:51:40.605008 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 00:51:40.605014 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 00:51:40.605020 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 00:51:40.605026 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 00:51:40.605032 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 00:51:40.605043 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 00:51:40.605049 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 00:51:40.605055 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 00:51:40.605061 | orchestrator |
2026-04-05 00:51:40.605067 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-05 00:51:40.605073 | orchestrator | Sunday 05 April 2026 00:51:38 +0000 (0:00:19.936) 0:04:51.324 **********
2026-04-05 00:51:40.605079 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.605607 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.605624 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.605630 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.605636 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.605642 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.605649 | orchestrator |
2026-04-05 00:51:40.605655 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-05 00:51:40.605661 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:00.554) 0:04:51.879 **********
2026-04-05 00:51:40.605668 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:51:40.605674 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:51:40.605680 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:51:40.605686 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:51:40.605692 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:51:40.605697 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:51:40.605703 | orchestrator |
2026-04-05 00:51:40.605713 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:51:40.605720 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:51:40.605728 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-05 00:51:40.605734 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-05 00:51:40.605740 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-05 00:51:40.605747 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 00:51:40.605753 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 00:51:40.605759 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 00:51:40.605765 | orchestrator |
2026-04-05 00:51:40.605771 | orchestrator |
2026-04-05 00:51:40.605777 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:51:40.605784 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:00.803) 0:04:52.682 **********
2026-04-05 00:51:40.605790 | orchestrator | ===============================================================================
2026-04-05 00:51:40.605796 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.90s
2026-04-05 00:51:40.605802 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.26s
2026-04-05 00:51:40.605809 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.53s
2026-04-05 00:51:40.605822 | orchestrator | Manage labels ---------------------------------------------------------- 19.94s
2026-04-05 00:51:40.605828 | orchestrator | kubectl : Install required packages ------------------------------------ 15.55s
2026-04-05 00:51:40.605841 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.27s
2026-04-05 00:51:40.605847 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.61s
2026-04-05 00:51:40.605853 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.86s
2026-04-05 00:51:40.605859 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.49s
2026-04-05 00:51:40.605865 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.40s
2026-04-05 00:51:40.605871 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.25s
2026-04-05 00:51:40.605877 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.84s
2026-04-05 00:51:40.605884 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.69s
2026-04-05 00:51:40.605890 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.81s
2026-04-05 00:51:40.605896 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.60s
2026-04-05 00:51:40.605902 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.27s
2026-04-05 00:51:40.605908 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.18s
2026-04-05 00:51:40.605914 | orchestrator | k3s_server_post : Wait for connectivity to kube VIP --------------------- 2.13s
2026-04-05 00:51:40.605920 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.09s
2026-04-05 00:51:40.605926 | orchestrator | k3s_server_post : Test for existing Cilium install ---------------------- 2.07s
2026-04-05 00:51:40.605933 | orchestrator | 2026-04-05 00:51:40 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:51:40.605939 | orchestrator | 2026-04-05 00:51:40 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:51:40.605945 | orchestrator | 2026-04-05 00:51:40 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:51:43.649638 | orchestrator | 2026-04-05 00:51:43 | INFO  | Task fa00c608-297d-4fc6-9cb2-087513a79d46 is in state STARTED
2026-04-05 00:51:43.653499 | orchestrator | 2026-04-05 00:51:43 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:51:43.655598 | orchestrator | 2026-04-05 00:51:43 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:51:43.656306 | orchestrator | 2026-04-05 00:51:43 | INFO  | Task 7ec303bd-1e9d-4cf8-ba9d-f59ba9d6f993 is in state STARTED
2026-04-05 00:51:43.659324 | orchestrator | 2026-04-05 00:51:43 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:51:43.660523 | orchestrator | 2026-04-05 00:51:43 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:51:43.660566 | orchestrator | 2026-04-05 00:51:43 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:51:46.717081 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task fa00c608-297d-4fc6-9cb2-087513a79d46 is in state STARTED
2026-04-05 00:51:46.717181 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:51:46.717201 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:51:46.717218 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task 7ec303bd-1e9d-4cf8-ba9d-f59ba9d6f993 is in state STARTED
2026-04-05 00:51:46.717234 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:51:46.717251 | orchestrator | 2026-04-05 00:51:46 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:51:46.717305 | orchestrator | 2026-04-05 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:51:49.754365 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task fa00c608-297d-4fc6-9cb2-087513a79d46 is in state SUCCESS
2026-04-05 00:51:49.754529 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:51:49.759878 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:51:49.759960 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task 7ec303bd-1e9d-4cf8-ba9d-f59ba9d6f993 is in state STARTED
2026-04-05 00:51:49.761163 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:51:49.762390 | orchestrator | 2026-04-05 00:51:49 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:51:49.762611 | orchestrator | 2026-04-05 00:51:49 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:51:52.838621 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:51:52.838712 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:51:52.838724 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task 7ec303bd-1e9d-4cf8-ba9d-f59ba9d6f993 is in state STARTED
2026-04-05 00:51:52.838731 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:51:52.838738 | orchestrator | 2026-04-05 00:51:52 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:51:52.838745 | orchestrator | 2026-04-05 00:51:52 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:51:55.880998 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:51:55.881086 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:51:55.881348 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 7ec303bd-1e9d-4cf8-ba9d-f59ba9d6f993 is in state SUCCESS
2026-04-05 00:51:55.883121 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:51:55.884214 | orchestrator | 2026-04-05 00:51:55 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:51:55.884251 | orchestrator | 2026-04-05 00:51:55 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:51:58.922530 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:51:58.922602 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:51:58.923525 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:51:58.924906 | orchestrator | 2026-04-05 00:51:58 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:51:58.925026 | orchestrator | 2026-04-05 00:51:58 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:01.968346 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:01.968563 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:01.968584 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:52:01.969032 | orchestrator | 2026-04-05 00:52:01 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:01.969103 | orchestrator | 2026-04-05 00:52:01 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:05.001277 | orchestrator | 2026-04-05 00:52:05 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:05.002302 | orchestrator | 2026-04-05 00:52:05 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:05.003565 | orchestrator | 2026-04-05 00:52:05 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:52:05.004582 | orchestrator | 2026-04-05 00:52:05 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:05.005036 | orchestrator | 2026-04-05 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:08.055876 | orchestrator | 2026-04-05 00:52:08 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:08.060971 | orchestrator | 2026-04-05 00:52:08 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:08.064128 | orchestrator | 2026-04-05 00:52:08 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:52:08.067657 | orchestrator | 2026-04-05 00:52:08 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:08.067698 | orchestrator | 2026-04-05 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:11.100416 | orchestrator | 2026-04-05 00:52:11 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:11.101328 | orchestrator | 2026-04-05 00:52:11 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:11.102314 | orchestrator | 2026-04-05 00:52:11 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:52:11.103596 | orchestrator | 2026-04-05 00:52:11 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:11.103794 | orchestrator | 2026-04-05 00:52:11 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:14.136501 | orchestrator | 2026-04-05 00:52:14 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:14.136934 | orchestrator | 2026-04-05 00:52:14 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:14.138481 | orchestrator | 2026-04-05 00:52:14 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED
2026-04-05 00:52:14.140043 | orchestrator | 2026-04-05 00:52:14 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:14.140080 | orchestrator | 2026-04-05 00:52:14 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:17.188964 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:17.191745 | orchestrator | 2026-04-05 00:52:17 | INFO  |
Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:52:17.193573 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:52:17.195468 | orchestrator | 2026-04-05 00:52:17 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:52:17.195634 | orchestrator | 2026-04-05 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:20.236116 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:52:20.237164 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:52:20.239314 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:52:20.242597 | orchestrator | 2026-04-05 00:52:20 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:52:20.242668 | orchestrator | 2026-04-05 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:23.282289 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:52:23.287720 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:52:23.289125 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state STARTED 2026-04-05 00:52:23.291641 | orchestrator | 2026-04-05 00:52:23 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:52:23.291911 | orchestrator | 2026-04-05 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:52:26.337987 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05 00:52:26.339599 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task 
bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:52:26.343837 | orchestrator | 2026-04-05 00:52:26.343888 | orchestrator | 2026-04-05 00:52:26.343894 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-05 00:52:26.343899 | orchestrator | 2026-04-05 00:52:26.343904 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-05 00:52:26.343909 | orchestrator | Sunday 05 April 2026 00:51:45 +0000 (0:00:00.290) 0:00:00.290 ********** 2026-04-05 00:52:26.343925 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-05 00:52:26.343930 | orchestrator | 2026-04-05 00:52:26.343934 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-05 00:52:26.343938 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:01.276) 0:00:01.567 ********** 2026-04-05 00:52:26.343943 | orchestrator | changed: [testbed-manager] 2026-04-05 00:52:26.343947 | orchestrator | 2026-04-05 00:52:26.343952 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-05 00:52:26.343956 | orchestrator | Sunday 05 April 2026 00:51:48 +0000 (0:00:01.765) 0:00:03.332 ********** 2026-04-05 00:52:26.343960 | orchestrator | changed: [testbed-manager] 2026-04-05 00:52:26.343964 | orchestrator | 2026-04-05 00:52:26.343969 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:52:26.343973 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 00:52:26.343979 | orchestrator | 2026-04-05 00:52:26.343983 | orchestrator | 2026-04-05 00:52:26.343987 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:52:26.343991 | orchestrator | Sunday 05 April 2026 00:51:48 +0000 (0:00:00.501) 0:00:03.834 ********** 
2026-04-05 00:52:26.343995 | orchestrator | ===============================================================================
2026-04-05 00:52:26.343999 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.77s
2026-04-05 00:52:26.344005 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.28s
2026-04-05 00:52:26.344013 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s
2026-04-05 00:52:26.344017 | orchestrator |
2026-04-05 00:52:26.344022 | orchestrator |
2026-04-05 00:52:26.344026 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-05 00:52:26.344030 | orchestrator |
2026-04-05 00:52:26.344035 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-05 00:52:26.344039 | orchestrator | Sunday 05 April 2026 00:51:44 +0000 (0:00:00.311) 0:00:00.311 **********
2026-04-05 00:52:26.344043 | orchestrator | ok: [testbed-manager]
2026-04-05 00:52:26.344062 | orchestrator |
2026-04-05 00:52:26.344066 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-05 00:52:26.344070 | orchestrator | Sunday 05 April 2026 00:51:45 +0000 (0:00:00.941) 0:00:01.253 **********
2026-04-05 00:52:26.344074 | orchestrator | ok: [testbed-manager]
2026-04-05 00:52:26.344079 | orchestrator |
2026-04-05 00:52:26.344083 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 00:52:26.344087 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:00.729) 0:00:01.982 **********
2026-04-05 00:52:26.344091 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 00:52:26.344095 | orchestrator |
2026-04-05 00:52:26.344099 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 00:52:26.344103 | orchestrator | Sunday 05 April 2026 00:51:47 +0000 (0:00:01.167) 0:00:03.150 **********
2026-04-05 00:52:26.344107 | orchestrator | changed: [testbed-manager]
2026-04-05 00:52:26.344111 | orchestrator |
2026-04-05 00:52:26.344115 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-05 00:52:26.344119 | orchestrator | Sunday 05 April 2026 00:51:48 +0000 (0:00:01.327) 0:00:04.477 **********
2026-04-05 00:52:26.344123 | orchestrator | changed: [testbed-manager]
2026-04-05 00:52:26.344127 | orchestrator |
2026-04-05 00:52:26.344131 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-05 00:52:26.344135 | orchestrator | Sunday 05 April 2026 00:51:49 +0000 (0:00:00.651) 0:00:05.129 **********
2026-04-05 00:52:26.344139 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:52:26.344144 | orchestrator |
2026-04-05 00:52:26.344148 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-05 00:52:26.344152 | orchestrator | Sunday 05 April 2026 00:51:51 +0000 (0:00:02.151) 0:00:07.281 **********
2026-04-05 00:52:26.344156 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 00:52:26.344160 | orchestrator |
2026-04-05 00:52:26.344164 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-05 00:52:26.344168 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:01.352) 0:00:08.634 **********
2026-04-05 00:52:26.344172 | orchestrator | ok: [testbed-manager]
2026-04-05 00:52:26.344177 | orchestrator |
2026-04-05 00:52:26.344181 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 00:52:26.344185 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.650) 0:00:09.284 **********
2026-04-05 00:52:26.344189 | orchestrator | ok: [testbed-manager]
2026-04-05 00:52:26.344193 | orchestrator |
2026-04-05 00:52:26.344197 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:52:26.344201 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:52:26.344205 | orchestrator |
2026-04-05 00:52:26.344209 | orchestrator |
2026-04-05 00:52:26.344213 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:52:26.344222 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.341) 0:00:09.626 **********
2026-04-05 00:52:26.344226 | orchestrator | ===============================================================================
2026-04-05 00:52:26.344230 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.15s
2026-04-05 00:52:26.344234 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.35s
2026-04-05 00:52:26.344239 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.33s
2026-04-05 00:52:26.344251 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.17s
2026-04-05 00:52:26.344256 | orchestrator | Get home directory of operator user ------------------------------------- 0.94s
2026-04-05 00:52:26.344264 | orchestrator | Create .kube directory -------------------------------------------------- 0.73s
2026-04-05 00:52:26.344268 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.65s
2026-04-05 00:52:26.344277 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.65s
2026-04-05 00:52:26.344284 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s
2026-04-05 00:52:26.344289 | orchestrator |
2026-04-05 00:52:26.344293 | orchestrator |
2026-04-05 00:52:26.344297 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-04-05 00:52:26.344301 | orchestrator |
2026-04-05 00:52:26.344305 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-05 00:52:26.344309 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.170) 0:00:00.170 **********
2026-04-05 00:52:26.344313 | orchestrator | ok: [localhost] => {
2026-04-05 00:52:26.344319 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-04-05 00:52:26.344324 | orchestrator | }
2026-04-05 00:52:26.344328 | orchestrator |
2026-04-05 00:52:26.344332 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-04-05 00:52:26.344336 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.086) 0:00:00.257 **********
2026-04-05 00:52:26.344341 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-04-05 00:52:26.344347 | orchestrator | ...ignoring
2026-04-05 00:52:26.344352 | orchestrator |
2026-04-05 00:52:26.344356 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-04-05 00:52:26.344360 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:04.399) 0:00:04.656 **********
2026-04-05 00:52:26.344364 | orchestrator | skipping: [localhost]
2026-04-05 00:52:26.344368 | orchestrator |
2026-04-05 00:52:26.344372 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-04-05 00:52:26.344376 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:00.183) 0:00:04.839 **********
2026-04-05 00:52:26.344380 | orchestrator | ok: [localhost]
2026-04-05 00:52:26.344384 | orchestrator |
2026-04-05 00:52:26.344388 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:52:26.344393 | orchestrator |
2026-04-05 00:52:26.344397 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:52:26.344401 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:00.500) 0:00:05.340 **********
2026-04-05 00:52:26.344405 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:52:26.344438 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:52:26.344443 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:52:26.344448 | orchestrator |
2026-04-05 00:52:26.344456 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:52:26.344462 | orchestrator | Sunday 05 April 2026 00:50:01 +0000 (0:00:00.626) 0:00:05.967 **********
2026-04-05 00:52:26.344467 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-05 00:52:26.344471 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-05 00:52:26.344476 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-05 00:52:26.344481 | orchestrator |
2026-04-05 00:52:26.344486 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-05 00:52:26.344491 | orchestrator |
2026-04-05 00:52:26.344496 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-05 00:52:26.344500 | orchestrator | Sunday 05 April 2026 00:50:02 +0000 (0:00:00.891) 0:00:06.858 **********
2026-04-05 00:52:26.344505 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:52:26.344511 | orchestrator |
2026-04-05 00:52:26.344515 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-05 00:52:26.344520 | orchestrator | Sunday 05 April 2026 00:50:03 +0000 (0:00:01.461) 0:00:08.320 **********
2026-04-05 00:52:26.344525 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:52:26.344530 | orchestrator |
2026-04-05 00:52:26.344535 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-05 00:52:26.344543 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:01.398) 0:00:09.718 **********
2026-04-05 00:52:26.344548 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:52:26.344553 | orchestrator |
2026-04-05 00:52:26.344557 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-05 00:52:26.344562 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:00.403) 0:00:10.122 **********
2026-04-05 00:52:26.344567 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:52:26.344572 | orchestrator |
2026-04-05 00:52:26.344577 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-05 00:52:26.344582 | orchestrator | Sunday 05 April 2026 00:50:06 +0000 (0:00:00.615) 0:00:10.737 **********
2026-04-05 00:52:26.344589 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:52:26.344595 | orchestrator |
2026-04-05 00:52:26.344600 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-05 00:52:26.344605 | orchestrator | Sunday 05 April 2026 00:50:06 +0000 (0:00:00.739) 0:00:11.477 **********
2026-04-05 00:52:26.344610 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:52:26.344614 | orchestrator |
2026-04-05 00:52:26.344621 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-05 00:52:26.344626 | orchestrator | Sunday 05 April 2026 00:50:07 +0000 (0:00:00.447) 0:00:11.925 **********
2026-04-05 00:52:26.344631 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:52:26.344636 | orchestrator |
2026-04-05 00:52:26.344641
| orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-05 00:52:26.344649 | orchestrator | Sunday 05 April 2026 00:50:08 +0000 (0:00:00.935) 0:00:12.860 **********
2026-04-05 00:52:26.344654 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:52:26.344658 | orchestrator |
2026-04-05 00:52:26.344663 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-05 00:52:26.344668 | orchestrator | Sunday 05 April 2026 00:50:09 +0000 (0:00:00.934) 0:00:13.795 **********
2026-04-05 00:52:26.344672 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:52:26.344677 | orchestrator |
2026-04-05 00:52:26.344682 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-05 00:52:26.344690 | orchestrator | Sunday 05 April 2026 00:50:10 +0000 (0:00:01.111) 0:00:14.906 **********
2026-04-05 00:52:26.344696 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:52:26.344701 | orchestrator |
2026-04-05 00:52:26.344705 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-04-05 00:52:26.344710 | orchestrator | Sunday 05 April 2026 00:50:11 +0000 (0:00:00.690) 0:00:15.596 **********
2026-04-05 00:52:26.344719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:52:26.344727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:52:26.344740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:52:26.344747 | orchestrator |
2026-04-05 00:52:26.344754 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-04-05 00:52:26.344761 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:01.790) 0:00:17.387 **********
2026-04-05 00:52:26.344773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:52:26.344781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:52:26.344793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:52:26.344800 | orchestrator |
2026-04-05 00:52:26.344806 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-04-05 00:52:26.344813 | orchestrator | Sunday 05 April 2026 00:50:14 +0000 (0:00:02.129) 0:00:19.517 **********
2026-04-05 00:52:26.344819 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-05 00:52:26.344825 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-05 00:52:26.344832 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-05 00:52:26.344838 | orchestrator |
2026-04-05 00:52:26.344844 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-05 00:52:26.344851 | orchestrator | Sunday 05 April 2026 00:50:17 +0000 (0:00:02.209) 0:00:21.726 **********
2026-04-05 00:52:26.344861 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-05 00:52:26.344867 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-05 00:52:26.344874 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-05 00:52:26.344880 | orchestrator |
2026-04-05 00:52:26.344887 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-05 00:52:26.344897 | orchestrator | Sunday 05 April 2026 00:50:21 +0000 (0:00:04.697) 0:00:26.423 **********
2026-04-05 00:52:26.344904 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-05 00:52:26.344910 | orchestrator | changed: [testbed-node-2] =>
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 00:52:26.344917 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 00:52:26.344923 | orchestrator | 2026-04-05 00:52:26.344929 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-05 00:52:26.344935 | orchestrator | Sunday 05 April 2026 00:50:23 +0000 (0:00:01.723) 0:00:28.146 ********** 2026-04-05 00:52:26.344942 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 00:52:26.344948 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 00:52:26.344954 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 00:52:26.344960 | orchestrator | 2026-04-05 00:52:26.344966 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-05 00:52:26.344973 | orchestrator | Sunday 05 April 2026 00:50:25 +0000 (0:00:01.904) 0:00:30.051 ********** 2026-04-05 00:52:26.344979 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 00:52:26.344990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 00:52:26.344997 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 00:52:26.345003 | orchestrator | 2026-04-05 00:52:26.345010 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-05 00:52:26.345016 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:01.787) 0:00:31.838 ********** 2026-04-05 00:52:26.345022 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 
00:52:26.345028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 00:52:26.345035 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 00:52:26.345042 | orchestrator | 2026-04-05 00:52:26.345048 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 00:52:26.345054 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:02.034) 0:00:33.873 ********** 2026-04-05 00:52:26.345061 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:52:26.345065 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:52:26.345070 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:52:26.345074 | orchestrator | 2026-04-05 00:52:26.345078 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-05 00:52:26.345082 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:00.428) 0:00:34.301 ********** 2026-04-05 00:52:26.345087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:52:26.345100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:52:26.345105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-05 00:52:26.345113 | orchestrator |
2026-04-05 00:52:26.345117 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-04-05 00:52:26.345121 | orchestrator | Sunday 05 April 2026 00:50:31 +0000 (0:00:01.603) 0:00:35.905 **********
2026-04-05 00:52:26.345125 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:52:26.345129 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:52:26.345133 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:52:26.345137 | orchestrator |
2026-04-05 00:52:26.345142 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-05 00:52:26.345149 | orchestrator | Sunday 05 April 2026 00:50:32 +0000 (0:00:01.156) 0:00:37.061 **********
2026-04-05 00:52:26.345155 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:52:26.345159 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:52:26.345163 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:52:26.345167 | orchestrator |
2026-04-05 00:52:26.345171 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-05 00:52:26.345175 | orchestrator | Sunday 05 April 2026 00:50:43 +0000 (0:00:11.150) 0:00:48.212 **********
2026-04-05 00:52:26.345179 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:52:26.345183 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:52:26.345189 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:52:26.345196 | orchestrator |
2026-04-05 00:52:26.345200 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-05 00:52:26.345205 | orchestrator |
2026-04-05 00:52:26.345212 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-05 00:52:26.345218 | orchestrator | Sunday 05 April 2026 00:50:44 +0000 (0:00:00.457) 0:00:48.669 **********
2026-04-05 00:52:26.345225 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:52:26.345232 | orchestrator |
2026-04-05 00:52:26.345238 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-05 00:52:26.345244 | orchestrator | Sunday 05 April 2026 00:50:44 +0000 (0:00:00.736) 0:00:49.405 **********
2026-04-05 00:52:26.345251 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:52:26.345257 | orchestrator |
2026-04-05 00:52:26.345264 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-05 00:52:26.345270 | orchestrator | Sunday 05 April 2026 00:50:45 +0000 (0:00:00.225) 0:00:49.631 **********
2026-04-05 00:52:26.345277 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:52:26.345283 | orchestrator |
2026-04-05 00:52:26.345288 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-05 00:52:26.345294 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:07.002) 0:00:56.633 **********
2026-04-05 00:52:26.345300 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:52:26.345306 | orchestrator |
2026-04-05 00:52:26.345312 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-05 00:52:26.345318 | orchestrator |
2026-04-05 00:52:26.345324 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-05 00:52:26.345330 | orchestrator | Sunday 05 April 2026 00:51:45 +0000 (0:00:53.111) 0:01:49.745 **********
2026-04-05 00:52:26.345337 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:52:26.345343 | orchestrator |
2026-04-05 00:52:26.345349 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-05 00:52:26.345360 | orchestrator | Sunday 05 April 2026 00:51:45 +0000 (0:00:00.687) 0:01:50.433 **********
2026-04-05 00:52:26.345366 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:52:26.345373 | orchestrator |
2026-04-05 00:52:26.345379 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-05 00:52:26.345385 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:00.308) 0:01:50.742 **********
2026-04-05 00:52:26.345390 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:52:26.345396 | orchestrator |
2026-04-05 00:52:26.345402 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-05 00:52:26.345428 | orchestrator | Sunday 05 April 2026 00:51:48 +0000 (0:00:02.087) 0:01:52.830 **********
2026-04-05 00:52:26.345436 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:52:26.345442 | orchestrator |
2026-04-05 00:52:26.345448 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-05 00:52:26.345454 | orchestrator |
2026-04-05 00:52:26.345461 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-05 00:52:26.345467 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:15.422) 0:02:08.253 **********
2026-04-05 00:52:26.345473 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:52:26.345480 | orchestrator |
2026-04-05 00:52:26.345492 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-05 00:52:26.345499 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:00.708) 0:02:08.961 **********
2026-04-05 00:52:26.345505 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:52:26.345512 | orchestrator |
2026-04-05 00:52:26.345519 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-05 00:52:26.345525 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:00.232) 0:02:09.194 **********
2026-04-05 00:52:26.345531 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:52:26.345538 | orchestrator |
2026-04-05 00:52:26.345547 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-05 00:52:26.345555 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:07.246) 0:02:16.441 **********
2026-04-05 00:52:26.345561 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:52:26.345568 | orchestrator |
2026-04-05 00:52:26.345574 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-05 00:52:26.345581 | orchestrator |
2026-04-05 00:52:26.345587 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-05 00:52:26.345593 | orchestrator | Sunday 05 April 2026 00:52:22 +0000 (0:00:10.763) 0:02:27.204 **********
2026-04-05 00:52:26.345599 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:52:26.345606 | orchestrator |
2026-04-05 00:52:26.345612 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-05 00:52:26.345619 | orchestrator | Sunday 05 April 2026 00:52:23 +0000 (0:00:00.743) 0:02:27.947 **********
2026-04-05 00:52:26.345625 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:52:26.345633 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:52:26.345639 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:52:26.345646 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-05 00:52:26.345652 | orchestrator | enable_outward_rabbitmq_True
2026-04-05 00:52:26.345659 | orchestrator |
2026-04-05 00:52:26.345665 | orchestrator | PLAY [Apply role rabbitmq (outward)]
*******************************************
2026-04-05 00:52:26.345672 | orchestrator | skipping: no hosts matched
2026-04-05 00:52:26.345678 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-05 00:52:26.345685 | orchestrator | outward_rabbitmq_restart
2026-04-05 00:52:26.345692 | orchestrator |
2026-04-05 00:52:26.345698 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-05 00:52:26.345705 | orchestrator | skipping: no hosts matched
2026-04-05 00:52:26.345712 | orchestrator |
2026-04-05 00:52:26.345718 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-05 00:52:26.345737 | orchestrator | skipping: no hosts matched
2026-04-05 00:52:26.345744 | orchestrator |
2026-04-05 00:52:26.345751 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:52:26.345758 | orchestrator | localhost      : ok=3   changed=0   unreachable=0   failed=0   skipped=1   rescued=0   ignored=1
2026-04-05 00:52:26.345766 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0   failed=0   skipped=8   rescued=0   ignored=0
2026-04-05 00:52:26.345773 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0   failed=0   skipped=2   rescued=0   ignored=0
2026-04-05 00:52:26.345780 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0   failed=0   skipped=2   rescued=0   ignored=0
2026-04-05 00:52:26.345786 | orchestrator |
2026-04-05 00:52:26.345794 | orchestrator |
2026-04-05 00:52:26.345801 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:52:26.345807 | orchestrator | Sunday 05 April 2026 00:52:25 +0000 (0:00:02.444) 0:02:30.391 **********
2026-04-05 00:52:26.345815 | orchestrator | ===============================================================================
2026-04-05 00:52:26.345822 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.30s
2026-04-05 00:52:26.345828 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.34s
2026-04-05 00:52:26.345834 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 11.15s
2026-04-05 00:52:26.345841 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.70s
2026-04-05 00:52:26.345848 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.40s
2026-04-05 00:52:26.345855 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.44s
2026-04-05 00:52:26.345859 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.21s
2026-04-05 00:52:26.345863 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.13s
2026-04-05 00:52:26.345867 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.13s
2026-04-05 00:52:26.345871 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.03s
2026-04-05 00:52:26.345875 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.90s
2026-04-05 00:52:26.345879 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.79s
2026-04-05 00:52:26.345888 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.79s
2026-04-05 00:52:26.345893 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.72s
2026-04-05 00:52:26.345897 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.60s
2026-04-05 00:52:26.345901 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.46s
2026-04-05 00:52:26.345905 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.40s
2026-04-05 00:52:26.345914 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.16s
2026-04-05 00:52:26.345919 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.11s
2026-04-05 00:52:26.345923 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.94s
2026-04-05 00:52:26.345927 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task 36e09ef4-3478-4758-b2f7-0ed2ea2542bd is in state SUCCESS
2026-04-05 00:52:26.345932 | orchestrator | 2026-04-05 00:52:26 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:26.345936 | orchestrator | 2026-04-05 00:52:26 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:29.388640 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:29.390427 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:29.391507 | orchestrator | 2026-04-05 00:52:29 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:29.391650 | orchestrator | 2026-04-05 00:52:29 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:32.446775 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED
2026-04-05 00:52:32.447082 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:32.451769 | orchestrator | 2026-04-05 00:52:32 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:32.451863 | orchestrator | 2026-04-05 00:52:32 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:52:35.481140 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state STARTED 2026-04-05
00:52:35.482153 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:52:35.482738 | orchestrator | 2026-04-05 00:52:35 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED
2026-04-05 00:52:35.483164 | orchestrator | 2026-04-05 00:52:35 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:53:27.227820 | orchestrator | 2026-04-05 00:53:27 | INFO  | Task f0ba49c2-0d7c-4680-86ea-ac304f7a27d0 is in state SUCCESS
2026-04-05 00:53:27.229200 | orchestrator |
2026-04-05 00:53:27.229329 | orchestrator |
2026-04-05 00:53:27.229348 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:53:27.229464 | orchestrator |
2026-04-05 00:53:27.229480 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:53:27.229583 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:00.221) 0:00:00.221 **********
2026-04-05 00:53:27.229598 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:53:27.229610 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:53:27.229621 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:53:27.229632 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.229643 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.229653 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.229664 | orchestrator |
2026-04-05 00:53:27.229675 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:53:27.229686 | orchestrator | Sunday 05 April 2026 00:50:49 +0000 (0:00:00.703)
0:00:00.924 **********
2026-04-05 00:53:27.229697 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-05 00:53:27.229708 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-05 00:53:27.229719 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-05 00:53:27.229730 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-05 00:53:27.229768 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-05 00:53:27.229783 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-05 00:53:27.229795 | orchestrator |
2026-04-05 00:53:27.229808 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-05 00:53:27.229821 | orchestrator |
2026-04-05 00:53:27.229833 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-05 00:53:27.229846 | orchestrator | Sunday 05 April 2026 00:50:50 +0000 (0:00:00.860) 0:00:01.784 **********
2026-04-05 00:53:27.229859 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:53:27.229873 | orchestrator |
2026-04-05 00:53:27.229886 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-05 00:53:27.229898 | orchestrator | Sunday 05 April 2026 00:50:51 +0000 (0:00:01.042) 0:00:02.827 **********
2026-04-05 00:53:27.229913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.229930 |
orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.229943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.229956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.229969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.229982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.229994 | orchestrator | 2026-04-05 00:53:27.230085 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-05 00:53:27.230102 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:01.837) 0:00:04.664 ********** 2026-04-05 00:53:27.230116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230262 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230301 | orchestrator | 2026-04-05 00:53:27.230312 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-05 00:53:27.230323 | orchestrator | Sunday 05 April 2026 00:50:55 +0000 (0:00:02.214) 0:00:06.879 ********** 2026-04-05 00:53:27.230335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230520 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230532 | orchestrator | 2026-04-05 00:53:27.230544 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-05 00:53:27.230555 | orchestrator | Sunday 05 April 2026 00:50:56 +0000 (0:00:01.756) 0:00:08.635 ********** 2026-04-05 00:53:27.230566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230649 | orchestrator | 2026-04-05 00:53:27.230669 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-05 00:53:27.230681 | orchestrator | Sunday 05 April 2026 00:50:59 +0000 (0:00:02.208) 0:00:10.844 ********** 2026-04-05 00:53:27.230693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 00:53:27.230766 | orchestrator | 2026-04-05 00:53:27.230777 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-05 00:53:27.230789 | orchestrator | Sunday 05 April 2026 00:51:01 +0000 (0:00:02.400) 0:00:13.245 ********** 2026-04-05 00:53:27.230800 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:27.230811 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:27.230822 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:27.230833 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:27.230844 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:27.230854 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:27.230865 | orchestrator | 2026-04-05 00:53:27.230883 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-05 00:53:27.230894 | orchestrator | Sunday 05 April 2026 00:51:04 +0000 (0:00:02.866) 0:00:16.112 ********** 2026-04-05 00:53:27.230905 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-05 00:53:27.230916 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-05 00:53:27.230927 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-05 
00:53:27.230938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-05 00:53:27.230948 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:53:27.230959 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-05 00:53:27.230970 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-05 00:53:27.230981 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:53:27.230996 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:53:27.231006 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:53:27.231016 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-05 00:53:27.231027 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:53:27.231037 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 00:53:27.231047 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-05 00:53:27.231057 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-05 00:53:27.231066 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:53:27.231078 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-05 00:53:27.231087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-05 00:53:27.231097 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-05 00:53:27.231107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:53:27.231116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:53:27.231126 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:53:27.231136 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:53:27.231145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:53:27.231154 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:53:27.231164 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-05 00:53:27.231173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:53:27.231183 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:53:27.231199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:53:27.231208 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:53:27.231218 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:53:27.231232 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-05 00:53:27.231242 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-05 00:53:27.231251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:53:27.231261 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:53:27.231270 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:53:27.231280 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-05 00:53:27.231289 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-05 00:53:27.231299 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-05 00:53:27.231309 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-05 00:53:27.231318 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-05 00:53:27.231328 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-05 00:53:27.231337 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-05 00:53:27.231347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-05 00:53:27.231361 | 
orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-05 00:53:27.231372 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-05 00:53:27.231382 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-05 00:53:27.231391 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-05 00:53:27.231418 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-05 00:53:27.231428 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-05 00:53:27.231437 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-05 00:53:27.231447 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-05 00:53:27.231457 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-05 00:53:27.231466 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-05 00:53:27.231477 | orchestrator | 2026-04-05 00:53:27.231487 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:53:27.231497 | orchestrator | Sunday 05 April 2026 00:51:25 +0000 (0:00:21.192) 0:00:37.305 ********** 2026-04-05 00:53:27.231514 | orchestrator | 2026-04-05 00:53:27.231523 | orchestrator | TASK 
[ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:53:27.231533 | orchestrator | Sunday 05 April 2026 00:51:25 +0000 (0:00:00.140) 0:00:37.445 ********** 2026-04-05 00:53:27.231543 | orchestrator | 2026-04-05 00:53:27.231553 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:53:27.231563 | orchestrator | Sunday 05 April 2026 00:51:25 +0000 (0:00:00.147) 0:00:37.592 ********** 2026-04-05 00:53:27.231573 | orchestrator | 2026-04-05 00:53:27.231583 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:53:27.231593 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:00.244) 0:00:37.836 ********** 2026-04-05 00:53:27.231602 | orchestrator | 2026-04-05 00:53:27.231613 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:53:27.231622 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:00.146) 0:00:37.983 ********** 2026-04-05 00:53:27.231632 | orchestrator | 2026-04-05 00:53:27.231642 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-05 00:53:27.231651 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:00.172) 0:00:38.155 ********** 2026-04-05 00:53:27.231661 | orchestrator | 2026-04-05 00:53:27.231671 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-05 00:53:27.231680 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:00.140) 0:00:38.296 ********** 2026-04-05 00:53:27.231690 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:53:27.231700 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:53:27.231710 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:53:27.231720 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.231730 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.231739 | orchestrator | ok: 
[testbed-node-2] 2026-04-05 00:53:27.231749 | orchestrator | 2026-04-05 00:53:27.231764 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-05 00:53:27.231774 | orchestrator | Sunday 05 April 2026 00:51:30 +0000 (0:00:04.110) 0:00:42.406 ********** 2026-04-05 00:53:27.231784 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:53:27.231794 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:53:27.231803 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:53:27.231814 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:53:27.231823 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:53:27.231833 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:53:27.231843 | orchestrator | 2026-04-05 00:53:27.231853 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-05 00:53:27.231877 | orchestrator | 2026-04-05 00:53:27.231897 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-05 00:53:27.231908 | orchestrator | Sunday 05 April 2026 00:51:59 +0000 (0:00:28.770) 0:01:11.177 ********** 2026-04-05 00:53:27.231918 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:53:27.231927 | orchestrator | 2026-04-05 00:53:27.231937 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-05 00:53:27.231947 | orchestrator | Sunday 05 April 2026 00:52:00 +0000 (0:00:00.552) 0:01:11.729 ********** 2026-04-05 00:53:27.231957 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:53:27.231967 | orchestrator | 2026-04-05 00:53:27.231977 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-05 00:53:27.231986 | orchestrator | Sunday 05 April 2026 
00:52:00 +0000 (0:00:00.812) 0:01:12.541 ********** 2026-04-05 00:53:27.231996 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.232006 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.232015 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:27.232025 | orchestrator | 2026-04-05 00:53:27.232034 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-05 00:53:27.232044 | orchestrator | Sunday 05 April 2026 00:52:01 +0000 (0:00:01.005) 0:01:13.547 ********** 2026-04-05 00:53:27.232060 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.232070 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.232079 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:27.232095 | orchestrator | 2026-04-05 00:53:27.232106 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-05 00:53:27.232115 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:00.419) 0:01:13.966 ********** 2026-04-05 00:53:27.232125 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.232135 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.232144 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:27.232154 | orchestrator | 2026-04-05 00:53:27.232164 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-05 00:53:27.232173 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:00.534) 0:01:14.500 ********** 2026-04-05 00:53:27.232183 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.232193 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.232202 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:27.232213 | orchestrator | 2026-04-05 00:53:27.232222 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-05 00:53:27.232232 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:00.377) 0:01:14.878 ********** 2026-04-05 
00:53:27.232241 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.232251 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.232261 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:27.232271 | orchestrator | 2026-04-05 00:53:27.232281 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-05 00:53:27.232290 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:00.321) 0:01:15.200 ********** 2026-04-05 00:53:27.232300 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232310 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232320 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232329 | orchestrator | 2026-04-05 00:53:27.232339 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-05 00:53:27.232349 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:00.300) 0:01:15.500 ********** 2026-04-05 00:53:27.232359 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232368 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232378 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232387 | orchestrator | 2026-04-05 00:53:27.232456 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-05 00:53:27.232467 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:00.303) 0:01:15.804 ********** 2026-04-05 00:53:27.232477 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232487 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232497 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232506 | orchestrator | 2026-04-05 00:53:27.232517 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-05 00:53:27.232526 | orchestrator | Sunday 05 April 2026 00:52:04 +0000 (0:00:00.867) 0:01:16.671 ********** 2026-04-05 00:53:27.232536 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232546 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232555 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232565 | orchestrator | 2026-04-05 00:53:27.232575 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-05 00:53:27.232585 | orchestrator | Sunday 05 April 2026 00:52:05 +0000 (0:00:00.418) 0:01:17.090 ********** 2026-04-05 00:53:27.232595 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232612 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232628 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232645 | orchestrator | 2026-04-05 00:53:27.232669 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-05 00:53:27.232689 | orchestrator | Sunday 05 April 2026 00:52:05 +0000 (0:00:00.321) 0:01:17.412 ********** 2026-04-05 00:53:27.232703 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232717 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232745 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232760 | orchestrator | 2026-04-05 00:53:27.232775 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-05 00:53:27.232832 | orchestrator | Sunday 05 April 2026 00:52:06 +0000 (0:00:00.368) 0:01:17.780 ********** 2026-04-05 00:53:27.232847 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232863 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232879 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232894 | orchestrator | 2026-04-05 00:53:27.232910 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-05 00:53:27.232927 | orchestrator | Sunday 05 April 2026 00:52:06 +0000 (0:00:00.558) 0:01:18.339 ********** 2026-04-05 00:53:27.232942 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.232958 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.232974 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.232988 | orchestrator | 2026-04-05 00:53:27.233004 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-05 00:53:27.233020 | orchestrator | Sunday 05 April 2026 00:52:06 +0000 (0:00:00.286) 0:01:18.626 ********** 2026-04-05 00:53:27.233037 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.233052 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.233062 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.233072 | orchestrator | 2026-04-05 00:53:27.233082 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-05 00:53:27.233091 | orchestrator | Sunday 05 April 2026 00:52:07 +0000 (0:00:00.324) 0:01:18.950 ********** 2026-04-05 00:53:27.233101 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.233111 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.233120 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.233130 | orchestrator | 2026-04-05 00:53:27.233140 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-05 00:53:27.233149 | orchestrator | Sunday 05 April 2026 00:52:07 +0000 (0:00:00.321) 0:01:19.272 ********** 2026-04-05 00:53:27.233159 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.233168 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.233177 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.233187 | orchestrator | 2026-04-05 00:53:27.233196 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-05 00:53:27.233206 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:00.471) 0:01:19.743 ********** 2026-04-05 00:53:27.233216 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.233226 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:53:27.233246 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:53:27.233256 | orchestrator | 2026-04-05 00:53:27.233266 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-05 00:53:27.233276 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:00.276) 0:01:20.019 ********** 2026-04-05 00:53:27.233286 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:53:27.233295 | orchestrator | 2026-04-05 00:53:27.233305 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-05 00:53:27.233314 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:00.591) 0:01:20.611 ********** 2026-04-05 00:53:27.233324 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.233333 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.233343 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:27.233352 | orchestrator | 2026-04-05 00:53:27.233362 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-05 00:53:27.233371 | orchestrator | Sunday 05 April 2026 00:52:09 +0000 (0:00:00.909) 0:01:21.520 ********** 2026-04-05 00:53:27.233381 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:53:27.233390 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:53:27.233424 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:53:27.233449 | orchestrator | 2026-04-05 00:53:27.233458 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-05 00:53:27.233468 | orchestrator | Sunday 05 April 2026 00:52:10 +0000 (0:00:00.588) 0:01:22.109 ********** 2026-04-05 00:53:27.233478 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:53:27.233488 | orchestrator | skipping: 
[testbed-node-1]
2026-04-05 00:53:27.233498 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.233507 | orchestrator |
2026-04-05 00:53:27.233518 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-05 00:53:27.233527 | orchestrator | Sunday 05 April 2026 00:52:10 +0000 (0:00:00.426) 0:01:22.536 **********
2026-04-05 00:53:27.233537 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:27.233547 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.233557 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.233567 | orchestrator |
2026-04-05 00:53:27.233577 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-05 00:53:27.233587 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:00.378) 0:01:22.914 **********
2026-04-05 00:53:27.233597 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:27.233607 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.233617 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.233626 | orchestrator |
2026-04-05 00:53:27.233636 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-05 00:53:27.233645 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:00.510) 0:01:23.424 **********
2026-04-05 00:53:27.233655 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:27.233665 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.233675 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.233684 | orchestrator |
2026-04-05 00:53:27.233694 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-04-05 00:53:27.233704 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:00.329) 0:01:23.994 **********
2026-04-05 00:53:27.233714 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:27.233724 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.233734 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.233744 | orchestrator |
2026-04-05 00:53:27.233754 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-04-05 00:53:27.233764 | orchestrator | Sunday 05 April 2026 00:52:12 +0000 (0:00:00.329) 0:01:24.324 **********
2026-04-05 00:53:27.233773 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:27.233783 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.233792 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.233802 | orchestrator |
2026-04-05 00:53:27.233812 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-05 00:53:27.233827 | orchestrator | Sunday 05 April 2026 00:52:13 +0000 (0:00:00.385) 0:01:24.709 **********
2026-04-05 00:53:27.233838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233949 | orchestrator |
2026-04-05 00:53:27.233959 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-05 00:53:27.233970 | orchestrator | Sunday 05 April 2026 00:52:14 +0000 (0:00:01.649) 0:01:26.358 **********
2026-04-05 00:53:27.233984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.233995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234148 | orchestrator |
2026-04-05 00:53:27.234158 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-05 00:53:27.234168 | orchestrator | Sunday 05 April 2026 00:52:18 +0000 (0:00:04.169) 0:01:30.527 **********
2026-04-05 00:53:27.234178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.234293 | orchestrator |
2026-04-05 00:53:27.234303 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:53:27.234313 | orchestrator | Sunday 05 April 2026 00:52:20 +0000 (0:00:02.028) 0:01:32.556 **********
2026-04-05 00:53:27.234323 | orchestrator |
2026-04-05 00:53:27.234332 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:53:27.234341 | orchestrator | Sunday 05 April 2026 00:52:20 +0000 (0:00:00.070) 0:01:32.627 **********
2026-04-05 00:53:27.234351 | orchestrator |
2026-04-05 00:53:27.234360 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:53:27.234370 | orchestrator | Sunday 05 April 2026 00:52:21 +0000 (0:00:00.070) 0:01:32.698 **********
2026-04-05 00:53:27.234379 | orchestrator |
2026-04-05 00:53:27.234389 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-05 00:53:27.234416 | orchestrator | Sunday 05 April 2026 00:52:21 +0000 (0:00:00.080) 0:01:32.778 **********
2026-04-05 00:53:27.234427 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:27.234437 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:27.234447 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:27.234457 | orchestrator |
2026-04-05 00:53:27.234468 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-05 00:53:27.234487 | orchestrator | Sunday 05 April 2026 00:52:28 +0000 (0:00:07.667) 0:01:40.445 **********
2026-04-05 00:53:27.234506 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:27.234533 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:27.234562 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:27.234580 | orchestrator |
2026-04-05 00:53:27.234598 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-05 00:53:27.234614 | orchestrator | Sunday 05 April 2026 00:52:35 +0000 (0:00:06.917) 0:01:47.363 **********
2026-04-05 00:53:27.234630 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:27.234646 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:27.234664 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:27.234681 | orchestrator |
2026-04-05 00:53:27.234707 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-05 00:53:27.234727 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:07.778) 0:01:55.142 **********
2026-04-05 00:53:27.234744 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:27.234761 | orchestrator |
2026-04-05 00:53:27.234780 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-05 00:53:27.234798 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:00.270) 0:01:55.412 **********
2026-04-05 00:53:27.234816 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.234827 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.234837 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.234846 | orchestrator |
2026-04-05 00:53:27.234856 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-05 00:53:27.234866 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:01.025) 0:01:56.438 **********
2026-04-05 00:53:27.234876 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.234885 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.234895 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:27.234905 | orchestrator |
2026-04-05 00:53:27.234914 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-05 00:53:27.234924 | orchestrator | Sunday 05 April 2026 00:52:45 +0000 (0:00:00.603) 0:01:57.042 **********
2026-04-05 00:53:27.234934 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.234944 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.234954 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.234963 | orchestrator |
2026-04-05 00:53:27.234973 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-05 00:53:27.234983 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:01.158) 0:01:58.200 **********
2026-04-05 00:53:27.234992 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.235002 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.235011 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:27.235021 | orchestrator |
2026-04-05 00:53:27.235030 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-05 00:53:27.235040 | orchestrator | Sunday 05 April 2026 00:52:47 +0000 (0:00:00.707) 0:01:58.907 **********
2026-04-05 00:53:27.235050 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.235059 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.235079 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.235089 | orchestrator |
2026-04-05 00:53:27.235099 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-05 00:53:27.235109 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:00.787) 0:01:59.695 **********
2026-04-05 00:53:27.235119 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.235128 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.235138 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.235147 | orchestrator |
2026-04-05 00:53:27.235157 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-05 00:53:27.235166 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:00.873) 0:02:00.569 **********
2026-04-05 00:53:27.235176 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.235185 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.235194 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.235204 | orchestrator |
2026-04-05 00:53:27.235213 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-05 00:53:27.235223 | orchestrator | Sunday 05 April 2026 00:52:49 +0000 (0:00:00.539) 0:02:01.109 **********
2026-04-05 00:53:27.235242 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235263 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235273 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235299 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235310 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235319 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235342 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235352 | orchestrator |
2026-04-05 00:53:27.235362 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-05 00:53:27.235372 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:01.714) 0:02:02.823 **********
2026-04-05 00:53:27.235388 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235548 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235583 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235594 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235667 | orchestrator |
2026-04-05 00:53:27.235685 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-05 00:53:27.235705 | orchestrator | Sunday 05 April 2026 00:52:55 +0000 (0:00:04.618) 0:02:07.441 **********
2026-04-05 00:53:27.235746 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235777 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235793 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235810 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:53:27.235890 | orchestrator |
2026-04-05 00:53:27.235901 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:53:27.235911 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:03.376) 0:02:10.817 **********
2026-04-05 00:53:27.235923 | orchestrator |
2026-04-05 00:53:27.235935 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:53:27.235947 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:00.066) 0:02:10.884 **********
2026-04-05 00:53:27.235973 | orchestrator |
2026-04-05 00:53:27.235982 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 00:53:27.235988 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:00.066) 0:02:10.951 **********
2026-04-05 00:53:27.235995 | orchestrator |
2026-04-05 00:53:27.236001 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-05 00:53:27.236008 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:00.359) 0:02:11.310 **********
2026-04-05 00:53:27.236015 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:27.236023 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:27.236029 | orchestrator |
2026-04-05 00:53:27.236042 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-05 00:53:27.236050 | orchestrator | Sunday 05 April 2026 00:53:06 +0000 (0:00:06.619) 0:02:17.930 **********
2026-04-05 00:53:27.236056 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:27.236063 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:27.236069 | orchestrator |
2026-04-05 00:53:27.236076 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-05 00:53:27.236083 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:06.284) 0:02:24.215 **********
2026-04-05 00:53:27.236089 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:53:27.236096 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:53:27.236103 | orchestrator |
2026-04-05 00:53:27.236109 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-05 00:53:27.236116 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:06.303) 0:02:30.518 **********
2026-04-05 00:53:27.236123 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:53:27.236130 | orchestrator |
2026-04-05 00:53:27.236137 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-05 00:53:27.236143 | orchestrator | Sunday 05 April 2026 00:53:18 +0000 (0:00:00.125) 0:02:30.644 **********
2026-04-05 00:53:27.236150 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.236157 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.236163 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.236170 | orchestrator |
2026-04-05 00:53:27.236177 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-05 00:53:27.236183 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:00.823) 0:02:31.467 **********
2026-04-05 00:53:27.236190 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.236196 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.236203 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:27.236210 | orchestrator |
2026-04-05 00:53:27.236217 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-05 00:53:27.236223 | orchestrator | Sunday 05 April 2026 00:53:20 +0000 (0:00:00.976) 0:02:32.240 **********
2026-04-05 00:53:27.236230 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.236237 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.236243 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.236250 | orchestrator |
2026-04-05 00:53:27.236257 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-05 00:53:27.236263 | orchestrator | Sunday 05 April 2026 00:53:21 +0000 (0:00:00.976) 0:02:33.217 **********
2026-04-05 00:53:27.236270 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:53:27.236276 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:53:27.236283 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:53:27.236290 | orchestrator |
2026-04-05 00:53:27.236296 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-05 00:53:27.236303 | orchestrator | Sunday 05 April 2026 00:53:22 +0000 (0:00:00.674) 0:02:33.892 **********
2026-04-05 00:53:27.236309 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.236316 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.236323 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.236329 | orchestrator |
2026-04-05 00:53:27.236336 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-05 00:53:27.236348 | orchestrator | Sunday 05 April 2026 00:53:23 +0000 (0:00:00.888) 0:02:34.780 **********
2026-04-05 00:53:27.236354 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:53:27.236361 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:53:27.236367 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:53:27.236374 | orchestrator |
2026-04-05 00:53:27.236381 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:53:27.236388 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-05 00:53:27.236425 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-05 00:53:27.236440 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-05 00:53:27.236448 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:53:27.236455 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:53:27.236462 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 00:53:27.236468 | orchestrator |
2026-04-05 00:53:27.236475 | orchestrator |
2026-04-05 00:53:27.236482 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:53:27.236489 | orchestrator | Sunday 05 April 2026 00:53:24 +0000 (0:00:01.728) 0:02:36.508 **********
2026-04-05 00:53:27.236495 | orchestrator | ===============================================================================
2026-04-05 00:53:27.236502 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.77s
2026-04-05 00:53:27.236508 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.19s
2026-04-05 00:53:27.236515 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.29s
2026-04-05 00:53:27.236522 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.08s
2026-04-05 00:53:27.236528 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.20s
2026-04-05 00:53:27.236535 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.62s
2026-04-05 00:53:27.236542 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.17s
2026-04-05 00:53:27.236553 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 4.11s
2026-04-05 00:53:27.236560 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.38s
2026-04-05 00:53:27.236567 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.87s
2026-04-05 00:53:27.236573 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.40s
2026-04-05 00:53:27.236580 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.22s
2026-04-05 00:53:27.236587 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.21s
2026-04-05 00:53:27.236593 | orchestrator | ovn-db : Check ovn containers
------------------------------------------- 2.03s 2026-04-05 00:53:27.236600 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.84s 2026-04-05 00:53:27.236606 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.75s 2026-04-05 00:53:27.236613 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.73s 2026-04-05 00:53:27.236619 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.71s 2026-04-05 00:53:27.236626 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.65s 2026-04-05 00:53:27.236633 | orchestrator | ovn-db : Get OVN_Southbound cluster leader ------------------------------ 1.16s 2026-04-05 00:53:27.236646 | orchestrator | 2026-04-05 00:53:27 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:53:27.236654 | orchestrator | 2026-04-05 00:53:27 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state STARTED 2026-04-05 00:53:27.236661 | orchestrator | 2026-04-05 00:53:27 | INFO  | Wait 1 second(s) until the next check 
2026-04-05 00:56:33.377118 | orchestrator | 2026-04-05 00:56:33 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:56:33.377264 | orchestrator | 2026-04-05 00:56:33 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:56:33.378144 | orchestrator | 2026-04-05 00:56:33 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:56:33.385537 | orchestrator | 2026-04-05 00:56:33 | INFO  | Task 1bc5b308-4c5f-49b2-92a5-1a49ed90e82d is in state SUCCESS 2026-04-05 00:56:33.387497 | orchestrator | 2026-04-05 00:56:33.387849 | orchestrator | 2026-04-05 00:56:33.387888 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:56:33.387903 | orchestrator | 2026-04-05 00:56:33.387914 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:56:33.387926 | orchestrator | Sunday 05 April 2026 00:49:31 +0000 (0:00:00.533) 0:00:00.533 ********** 2026-04-05 00:56:33.387937 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:33.387948 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:33.387959 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:33.387970 | orchestrator | 2026-04-05 00:56:33.387980 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:56:33.387991 | orchestrator | Sunday 05 April 2026 00:49:32 +0000 (0:00:00.493) 0:00:01.026 ********** 2026-04-05 00:56:33.388002 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-05 00:56:33.388013 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-05 
00:56:33.388024 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-05 00:56:33.388034 | orchestrator |
2026-04-05 00:56:33.388116 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-05 00:56:33.388131 | orchestrator |
2026-04-05 00:56:33.388144 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-05 00:56:33.388262 | orchestrator | Sunday 05 April 2026 00:49:33 +0000 (0:00:00.680) 0:00:01.707 **********
2026-04-05 00:56:33.388277 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:33.388319 | orchestrator |
2026-04-05 00:56:33.388344 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-05 00:56:33.388356 | orchestrator | Sunday 05 April 2026 00:49:34 +0000 (0:00:01.306) 0:00:03.014 **********
2026-04-05 00:56:33.388366 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.388377 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.388388 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.388398 | orchestrator |
2026-04-05 00:56:33.388409 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-05 00:56:33.388447 | orchestrator | Sunday 05 April 2026 00:49:36 +0000 (0:00:02.174) 0:00:05.189 **********
2026-04-05 00:56:33.388466 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:33.388477 | orchestrator |
2026-04-05 00:56:33.388488 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-05 00:56:33.388499 | orchestrator | Sunday 05 April 2026 00:49:37 +0000 (0:00:00.679) 0:00:05.868 **********
2026-04-05 00:56:33.388509 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.388520 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.388531 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.388541 | orchestrator |
2026-04-05 00:56:33.388552 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-05 00:56:33.388563 | orchestrator | Sunday 05 April 2026 00:49:38 +0000 (0:00:01.272) 0:00:07.140 **********
2026-04-05 00:56:33.388574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:33.388586 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:33.388596 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:33.388607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:33.388618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:33.388629 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 00:56:33.388683 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 00:56:33.388695 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 00:56:33.388706 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 00:56:33.388717 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 00:56:33.388728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 00:56:33.388813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 00:56:33.388824 | orchestrator |
2026-04-05 00:56:33.388835 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 00:56:33.388846 | orchestrator | Sunday 05 April 2026 00:49:43 +0000 (0:00:04.569) 0:00:11.710 **********
2026-04-05 00:56:33.388857 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-05 00:56:33.388868 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-05 00:56:33.388878 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-05 00:56:33.388889 | orchestrator |
2026-04-05 00:56:33.388900 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 00:56:33.388911 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:00:01.214) 0:00:12.924 **********
2026-04-05 00:56:33.388921 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-05 00:56:33.388956 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-05 00:56:33.388968 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-05 00:56:33.388979 | orchestrator |
2026-04-05 00:56:33.388990 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 00:56:33.389000 | orchestrator | Sunday 05 April 2026 00:49:46 +0000 (0:00:02.581) 0:00:15.506 **********
2026-04-05 00:56:33.389011 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-05 00:56:33.389022 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.389050 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-05 00:56:33.389063 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.389073 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-05 00:56:33.389093 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.389104 | orchestrator |
2026-04-05 00:56:33.389115 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-05 00:56:33.389125 | orchestrator | Sunday 05 April 2026 00:49:48 +0000 (0:00:01.215) 0:00:16.721 **********
2026-04-05 00:56:33.389140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.389242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.389257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.389269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.389281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.389351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.389385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.389397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.389441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.389456 | orchestrator |
2026-04-05 00:56:33.389467 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-05 00:56:33.389478 | orchestrator | Sunday 05 April 2026 00:49:51 +0000 (0:00:03.264) 0:00:19.985 **********
2026-04-05 00:56:33.389489 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.389499 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.389510 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.389521 | orchestrator |
2026-04-05 00:56:33.389532 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-05 00:56:33.389542 | orchestrator | Sunday 05 April 2026 00:49:53 +0000 (0:00:01.789) 0:00:21.774 **********
2026-04-05 00:56:33.389553 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-05 00:56:33.389564 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-05 00:56:33.389575 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-05 00:56:33.389585 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-05 00:56:33.389596 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-05 00:56:33.389630 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-05 00:56:33.389642 | orchestrator |
2026-04-05 00:56:33.389654 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-05 00:56:33.389664 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:02.481) 0:00:24.256 **********
2026-04-05 00:56:33.389675 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.389686 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.389696 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.389707 | orchestrator |
2026-04-05 00:56:33.389718 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-05 00:56:33.389729 | orchestrator | Sunday 05 April 2026 00:49:57 +0000 (0:00:01.500) 0:00:25.756 **********
2026-04-05 00:56:33.389739 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.389750 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.389761 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.389772 | orchestrator |
2026-04-05 00:56:33.389896 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-05 00:56:33.389908 | orchestrator | Sunday 05 April 2026 00:49:59 +0000 (0:00:02.844) 0:00:28.600 **********
2026-04-05 00:56:33.389928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.389949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.389961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.389978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:56:33.389990 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.390003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:56:33.390172 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.390303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:56:33.390422 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.390431 | orchestrator |
2026-04-05 00:56:33.390441 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-05 00:56:33.390451 | orchestrator | Sunday 05 April 2026 00:50:01 +0000 (0:00:01.495) 0:00:30.096 **********
2026-04-05 00:56:33.390469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:56:33.390553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:56:33.390623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170', '__omit_place_holder__c566962c1a7f6592ab2b46b7d597581d9ec0d170'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 00:56:33.390660 | orchestrator |
2026-04-05 00:56:33.390670 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-05 00:56:33.390680 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:04.457) 0:00:34.553 **********
2026-04-05 00:56:33.390690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 00:56:33.390732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.390769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.390805 | orchestrator |
2026-04-05 00:56:33.390815 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-05 00:56:33.390824 | orchestrator | Sunday 05 April 2026 00:50:09 +0000 (0:00:03.836) 0:00:38.390 **********
2026-04-05 00:56:33.390856 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 00:56:33.390868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 00:56:33.390878 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 00:56:33.390887 | orchestrator |
2026-04-05 00:56:33.390897 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-05 00:56:33.390906 | orchestrator | Sunday 05 April 2026 00:50:12 +0000 (0:00:02.663) 0:00:41.053 **********
2026-04-05 00:56:33.390916 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 00:56:33.390925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 00:56:33.390935 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 00:56:33.390945 | orchestrator |
2026-04-05 00:56:33.391809 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-05 00:56:33.391879 | orchestrator | Sunday 05 April 2026 00:50:17 +0000 (0:00:04.776) 0:00:45.830 **********
2026-04-05 00:56:33.391894 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.391906 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.391917 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.391928 | orchestrator |
2026-04-05 00:56:33.391940 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-05 00:56:33.391951 | orchestrator | Sunday 05 April 2026 00:50:19 +0000 (0:00:02.737) 0:00:48.568 **********
2026-04-05 00:56:33.391963 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 00:56:33.391975 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 00:56:33.391985 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 00:56:33.392001 | orchestrator |
2026-04-05 00:56:33.392022 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-05 00:56:33.392042 | orchestrator | Sunday 05 April 2026 00:50:22 +0000 (0:00:02.748) 0:00:51.322 **********
2026-04-05 00:56:33.392095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 00:56:33.392113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 00:56:33.392124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 00:56:33.392135 | orchestrator |
2026-04-05 00:56:33.392146 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-05 00:56:33.392157 | orchestrator | Sunday 05 April 2026 00:50:25 +0000 (0:00:02.748) 0:00:54.070 **********
2026-04-05 00:56:33.392168 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-05 00:56:33.392179 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-05 00:56:33.392215 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-05 00:56:33.392226 | orchestrator |
2026-04-05 00:56:33.392237 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-05 00:56:33.392248 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:01.948) 0:00:56.019 **********
2026-04-05 00:56:33.392259 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-05 00:56:33.392270 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-05 00:56:33.392280 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-05 00:56:33.392292 | orchestrator |
2026-04-05 00:56:33.392303 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-05 00:56:33.392314 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:02.240) 0:00:58.260 **********
2026-04-05 00:56:33.392324 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:33.392336 | orchestrator |
2026-04-05 00:56:33.392347 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-04-05 00:56:33.392358 | orchestrator | Sunday 05 April 2026 00:50:30 +0000 (0:00:00.976) 0:00:59.236 **********
2026-04-05 00:56:33.392375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True,
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:33.392391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:33.392427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:33.392450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:33.392470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:33.392485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 00:56:33.392498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:33.392509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:33.392521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 00:56:33.392532 | orchestrator | 2026-04-05 00:56:33.392544 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-05 00:56:33.392555 | orchestrator | Sunday 05 April 2026 00:50:34 +0000 (0:00:04.150) 0:01:03.386 ********** 2026-04-05 00:56:33.392576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.392595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.392612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.392624 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.392635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.392647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.392658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.392671 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.392682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.392707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.392723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.392735 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.392746 | orchestrator | 2026-04-05 00:56:33.392757 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-05 00:56:33.392768 | orchestrator | Sunday 05 April 2026 00:50:35 +0000 (0:00:00.853) 0:01:04.239 ********** 2026-04-05 00:56:33.392779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.392791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.392802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.392814 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.392825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.392850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.392863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.392874 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.392890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.392902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.392913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.392925 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.392936 | orchestrator | 2026-04-05 00:56:33.392947 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-05 00:56:33.392957 | orchestrator | Sunday 05 April 2026 00:50:38 +0000 (0:00:02.617) 0:01:06.857 
********** 2026-04-05 00:56:33.392969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.392993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393016 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.393027 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393062 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.393098 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.393158 | orchestrator | 2026-04-05 00:56:33.393169 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS certificate] *** 2026-04-05 00:56:33.393180 | orchestrator | Sunday 05 April 2026 00:50:38 +0000 (0:00:00.775) 0:01:07.633 ********** 2026-04-05 00:56:33.393208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393253 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.393264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-05 00:56:33.393306 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.393323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393364 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.393375 | orchestrator | 2026-04-05 00:56:33.393386 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 00:56:33.393397 | orchestrator | Sunday 05 April 2026 00:50:39 +0000 (0:00:00.624) 0:01:08.257 ********** 2026-04-05 00:56:33.393408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393448 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.393465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393505 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.393516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393557 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.393567 | orchestrator | 2026-04-05 00:56:33.393578 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-05 00:56:33.393589 | orchestrator | Sunday 05 April 2026 00:50:40 +0000 (0:00:01.308) 0:01:09.566 ********** 2026-04-05 00:56:33.393600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393641 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.393657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393697 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.393708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393748 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.393759 | orchestrator | 2026-04-05 00:56:33.393770 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-05 00:56:33.393785 | orchestrator | Sunday 05 April 2026 00:50:41 +0000 (0:00:00.704) 0:01:10.271 ********** 2026-04-05 00:56:33.393812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
 2026-04-05 00:56:33.393872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393895 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.393917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.393937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.393970 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.393991 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.394012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.394103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.394127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.394138 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.394150 | orchestrator | 2026-04-05 00:56:33.394161 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-05 00:56:33.394172 | orchestrator | Sunday 05 April 2026 00:50:42 +0000 (0:00:00.650) 0:01:10.921 ********** 2026-04-05 00:56:33.394201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.394214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.394226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.394237 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.394256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.394273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.394291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.394302 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.394314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 00:56:33.394326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 00:56:33.394337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 00:56:33.394349 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.394360 | orchestrator | 2026-04-05 00:56:33.394370 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-05 00:56:33.394381 | orchestrator | Sunday 05 April 2026 00:50:43 +0000 (0:00:01.458) 0:01:12.379 ********** 2026-04-05 00:56:33.394392 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:56:33.394403 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:56:33.394420 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 00:56:33.394432 | orchestrator | 2026-04-05 00:56:33.394442 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-05 00:56:33.394453 | orchestrator | Sunday 05 April 2026 00:50:45 +0000 (0:00:01.627) 0:01:14.007 ********** 2026-04-05 00:56:33.394464 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:56:33.394475 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:56:33.394492 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 00:56:33.394503 | orchestrator | 2026-04-05 00:56:33.394514 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-05 00:56:33.394525 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:01.851) 0:01:15.859 ********** 2026-04-05 00:56:33.394536 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:56:33.394547 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:56:33.394557 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 00:56:33.394577 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:56:33.394588 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.394599 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:56:33.394610 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.394621 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 00:56:33.394632 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.394643 | orchestrator | 2026-04-05 00:56:33.394654 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-05 00:56:33.394665 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:01.270) 0:01:17.129 ********** 2026-04-05 00:56:33.394676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:33.394687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:33.394698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 00:56:33.394717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.394735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.394751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 00:56:33.394763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.394775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.394786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 00:56:33.394797 | orchestrator |
2026-04-05 00:56:33.394808 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-05 00:56:33.394819 | orchestrator | Sunday 05 April 2026 00:50:51 +0000 (0:00:02.657) 0:01:19.787 **********
2026-04-05 00:56:33.394830 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:33.394840 | orchestrator |
2026-04-05 00:56:33.394851 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-05 00:56:33.394863 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:01.214) 0:01:21.002 **********
2026-04-05 00:56:33.394874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 00:56:33.394900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 00:56:33.394917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.394929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.394941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 00:56:33.394952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 00:56:33.394963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.394987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.394999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 00:56:33.395014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 00:56:33.395026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395048 | orchestrator |
2026-04-05 00:56:33.395059 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-05 00:56:33.395070 | orchestrator | Sunday 05 April 2026 00:50:58 +0000 (0:00:05.708) 0:01:26.710 **********
2026-04-05 00:56:33.395081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 00:56:33.395104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 00:56:33.395116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395144 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.395155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 00:56:33.395167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 00:56:33.395209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 00:56:33.395228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 00:56:33.395256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395279 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.395290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395301 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.395312 | orchestrator |
2026-04-05 00:56:33.395323 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-04-05 00:56:33.395340 | orchestrator | Sunday 05 April 2026 00:50:59 +0000 (0:00:01.048) 0:01:27.758 **********
2026-04-05 00:56:33.395352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-05 00:56:33.395363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-05 00:56:33.395375 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.395386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-05 00:56:33.395397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-05 00:56:33.395408 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.395419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-05 00:56:33.395430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-05 00:56:33.395441 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.395453 | orchestrator |
2026-04-05 00:56:33.395476 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-04-05 00:56:33.395488 | orchestrator | Sunday 05 April 2026 00:51:00 +0000 (0:00:01.829) 0:01:29.588 **********
2026-04-05 00:56:33.395498 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.395509 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.395520 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.395531 | orchestrator |
2026-04-05 00:56:33.395541 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-04-05 00:56:33.395552 | orchestrator | Sunday 05 April 2026 00:51:03 +0000 (0:00:02.133) 0:01:31.721 **********
2026-04-05 00:56:33.395563 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.395573 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.395584 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.395595 | orchestrator |
2026-04-05 00:56:33.395606 | orchestrator | TASK [include_role : barbican] *************************************************
2026-04-05 00:56:33.395616 | orchestrator | Sunday 05 April 2026 00:51:05 +0000 (0:00:02.329) 0:01:33.875 **********
2026-04-05 00:56:33.395627 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:33.395638 | orchestrator |
2026-04-05 00:56:33.395648 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-04-05 00:56:33.395659 | orchestrator | Sunday 05 April 2026 00:51:07 +0000 (0:00:02.329) 0:01:36.205 **********
2026-04-05 00:56:33.395675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 00:56:33.395694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 00:56:33.395736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 00:56:33.395786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395809 | orchestrator |
2026-04-05 00:56:33.395820 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-04-05 00:56:33.395831 | orchestrator | Sunday 05 April 2026 00:51:12 +0000 (0:00:04.985) 0:01:41.190 **********
2026-04-05 00:56:33.395848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 00:56:33.395860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395893 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.395905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 00:56:33.395916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395938 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.395955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 00:56:33.395971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.395989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 00:56:33.396000 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.396011 | orchestrator |
2026-04-05 00:56:33.396023 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-04-05 00:56:33.396034 | orchestrator | Sunday 05 April 2026 00:51:13 +0000 (0:00:01.237) 0:01:42.428 **********
2026-04-05 00:56:33.396044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-05 00:56:33.396056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-05 00:56:33.396067 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.396078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-05 00:56:33.396089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-05 00:56:33.396100 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.396111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-05 00:56:33.396122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-04-05 00:56:33.396133 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.396144 | orchestrator |
2026-04-05 00:56:33.396155 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-04-05 00:56:33.396166 | orchestrator | Sunday 05 April 2026 00:51:14 +0000 (0:00:01.187) 0:01:43.615 **********
2026-04-05 00:56:33.396176 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.396233 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.396246 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.396257 | orchestrator |
2026-04-05 00:56:33.396267 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-04-05 00:56:33.396278 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:01.422) 0:01:45.038 **********
2026-04-05 00:56:33.396289 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.396300 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.396310 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.396321 | orchestrator |
2026-04-05 00:56:33.396338 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-04-05 00:56:33.396349 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:02.094)
0:01:47.132 ********** 2026-04-05 00:56:33.396360 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.396371 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.396389 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.396400 | orchestrator | 2026-04-05 00:56:33.396411 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-05 00:56:33.396422 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:00.322) 0:01:47.454 ********** 2026-04-05 00:56:33.396433 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.396443 | orchestrator | 2026-04-05 00:56:33.396454 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-05 00:56:33.396465 | orchestrator | Sunday 05 April 2026 00:51:19 +0000 (0:00:01.000) 0:01:48.455 ********** 2026-04-05 00:56:33.396481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:56:33.396494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:56:33.396506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 00:56:33.396517 | orchestrator | 2026-04-05 00:56:33.396528 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-05 00:56:33.396539 | orchestrator | Sunday 05 April 2026 00:51:24 +0000 (0:00:05.025) 0:01:53.481 ********** 2026-04-05 00:56:33.396556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:56:33.396575 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.396587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:56:33.396598 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.396615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 00:56:33.396626 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.396637 | orchestrator | 2026-04-05 00:56:33.396648 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-05 00:56:33.396659 | orchestrator | Sunday 05 April 2026 00:51:29 +0000 (0:00:04.820) 0:01:58.302 ********** 2026-04-05 00:56:33.396671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:33.396684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:33.396696 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.396708 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:33.396720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:33.396737 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.396754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:33.396766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 00:56:33.396777 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.396786 | orchestrator | 2026-04-05 00:56:33.396796 | orchestrator | TASK [proxysql-config : 
Copying over ceph-rgw ProxySQL users config] *********** 2026-04-05 00:56:33.396805 | orchestrator | Sunday 05 April 2026 00:51:33 +0000 (0:00:03.895) 0:02:02.197 ********** 2026-04-05 00:56:33.396816 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.396825 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.396835 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.396844 | orchestrator | 2026-04-05 00:56:33.396854 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-05 00:56:33.396863 | orchestrator | Sunday 05 April 2026 00:51:33 +0000 (0:00:00.471) 0:02:02.669 ********** 2026-04-05 00:56:33.396873 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.396882 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.396892 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.396901 | orchestrator | 2026-04-05 00:56:33.396928 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-05 00:56:33.396938 | orchestrator | Sunday 05 April 2026 00:51:35 +0000 (0:00:01.923) 0:02:04.592 ********** 2026-04-05 00:56:33.396948 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.396957 | orchestrator | 2026-04-05 00:56:33.396967 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-05 00:56:33.396977 | orchestrator | Sunday 05 April 2026 00:51:37 +0000 (0:00:01.673) 0:02:06.266 ********** 2026-04-05 00:56:33.396987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.396997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.397056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.397067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397135 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397145 | orchestrator | 2026-04-05 00:56:33.397155 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-05 00:56:33.397165 | orchestrator | Sunday 05 April 2026 00:51:42 +0000 (0:00:04.521) 0:02:10.787 ********** 2026-04-05 00:56:33.397175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.397208 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397244 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.397258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.397269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.397299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2026-04-05 00:56:33.397324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397349 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.397360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397370 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.397379 | orchestrator | 2026-04-05 00:56:33.397389 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-05 00:56:33.397399 | orchestrator | Sunday 05 April 2026 00:51:43 +0000 (0:00:01.206) 0:02:11.994 ********** 2026-04-05 00:56:33.397409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 00:56:33.397419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 00:56:33.397429 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.397439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 00:56:33.397449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 00:56:33.397459 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.397473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 00:56:33.397483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 00:56:33.397493 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.397503 | orchestrator | 2026-04-05 00:56:33.397512 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-05 00:56:33.397522 | orchestrator | Sunday 05 April 2026 00:51:44 +0000 (0:00:01.639) 0:02:13.633 ********** 2026-04-05 00:56:33.397532 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.397541 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.397550 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.397560 | orchestrator | 2026-04-05 00:56:33.397570 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-05 00:56:33.397580 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:01.508) 0:02:15.141 ********** 2026-04-05 00:56:33.397589 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.397599 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.397608 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.397618 | orchestrator | 2026-04-05 00:56:33.397627 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-05 00:56:33.397641 | orchestrator | Sunday 05 April 2026 00:51:48 +0000 (0:00:02.419) 0:02:17.561 ********** 2026-04-05 00:56:33.397651 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.397660 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.397675 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.397685 | orchestrator | 2026-04-05 00:56:33.397695 | 
orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-05 00:56:33.397704 | orchestrator | Sunday 05 April 2026 00:51:49 +0000 (0:00:00.450) 0:02:18.011 ********** 2026-04-05 00:56:33.397714 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.397723 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.397732 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.397742 | orchestrator | 2026-04-05 00:56:33.397751 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-05 00:56:33.397761 | orchestrator | Sunday 05 April 2026 00:51:49 +0000 (0:00:00.288) 0:02:18.300 ********** 2026-04-05 00:56:33.397771 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.397780 | orchestrator | 2026-04-05 00:56:33.397790 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-05 00:56:33.397799 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:00.992) 0:02:19.292 ********** 2026-04-05 00:56:33.397810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-04-05 00:56:33.397820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:33.397831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397857 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 00:56:33.397907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:33.397922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397974 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.397985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 00:56:33.398000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:33.398010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-04-05 00:56:33.398122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398158 | orchestrator | 2026-04-05 00:56:33.398177 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-05 00:56:33.398213 | orchestrator | Sunday 05 April 2026 00:51:55 +0000 (0:00:05.267) 0:02:24.560 ********** 2026-04-05 00:56:33.398234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 00:56:33.398262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:33.398288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398340 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.398357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 00:56:33.398377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:33.398391 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 00:56:33.398411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 00:56:33.398442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398497 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.398507 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.398551 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.398561 | orchestrator | 2026-04-05 00:56:33.398571 | 
orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-05 00:56:33.398581 | orchestrator | Sunday 05 April 2026 00:51:56 +0000 (0:00:00.923) 0:02:25.483 **********
2026-04-05 00:56:33.398590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-05 00:56:33.398600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-05 00:56:33.398610 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.398624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-05 00:56:33.398635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-05 00:56:33.398645 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.398654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-04-05 00:56:33.398664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-04-05 00:56:33.398674 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.398683 | orchestrator |
2026-04-05 00:56:33.398693 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-05 00:56:33.398703 | orchestrator | Sunday 05 April 2026 00:51:58 +0000 (0:00:01.560) 0:02:27.044 **********
2026-04-05 00:56:33.398713 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.398722 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.398732 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.398742 | orchestrator |
2026-04-05 00:56:33.398751 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-05 00:56:33.398761 | orchestrator | Sunday 05 April 2026 00:51:59 +0000 (0:00:01.511) 0:02:28.555 **********
2026-04-05 00:56:33.398771 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.398780 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.398789 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.398799 | orchestrator |
2026-04-05 00:56:33.398808 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-05 00:56:33.398818 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:02.276) 0:02:30.831 **********
2026-04-05 00:56:33.398828 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.398837 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.398852 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.398862 | orchestrator |
2026-04-05 00:56:33.398872 | orchestrator | TASK [include_role : glance] ***************************************************
2026-04-05 00:56:33.398881 | orchestrator | Sunday 05 April 2026 00:52:02 +0000 (0:00:00.333) 0:02:31.164 **********
2026-04-05 00:56:33.398891 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:33.398901 | orchestrator |
2026-04-05 00:56:33.398911 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-05 00:56:33.398920 | orchestrator | Sunday 05 April 2026 00:52:03 +0000 (0:00:01.085)
0:02:32.249 ********** 2026-04-05 00:56:33.398939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:56:33.398957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.398979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:56:33.398996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.399008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 00:56:33.399036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.399048 | orchestrator | 2026-04-05 00:56:33.399058 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-05 00:56:33.399068 | orchestrator | Sunday 05 April 2026 00:52:08 +0000 (0:00:04.646) 0:02:36.896 ********** 2026-04-05 00:56:33.399078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:56:33.399106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.399118 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.399129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:56:33.399152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.399164 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.399181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 00:56:33.399287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.399307 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.399324 | orchestrator | 2026-04-05 00:56:33.399340 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-05 00:56:33.399357 | orchestrator | Sunday 05 April 2026 00:52:11 +0000 (0:00:03.363) 0:02:40.259 ********** 2026-04-05 00:56:33.399374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:33.399400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:33.399417 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.399432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:33.399450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:33.399461 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.399471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 00:56:33.399481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-05 00:56:33.399491 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.399501 | orchestrator |
2026-04-05 00:56:33.399510 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-05 00:56:33.399520 | orchestrator | Sunday 05 April 2026 00:52:15 +0000 (0:00:03.886) 0:02:44.145 **********
2026-04-05 00:56:33.399530 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.399539 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.399548 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.399558 | orchestrator |
2026-04-05 00:56:33.399567 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-05 00:56:33.399577 | orchestrator | Sunday 05 April 2026 00:52:16 +0000 (0:00:01.458) 0:02:45.604 **********
2026-04-05 00:56:33.399587 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.399596 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.399612 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.399623 | orchestrator |
2026-04-05 00:56:33.399632 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-05 00:56:33.399642 | orchestrator | Sunday 05 April 2026 00:52:19 +0000 (0:00:02.081) 0:02:47.686 **********
2026-04-05 00:56:33.399651 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.399661 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.399670 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.399680 | orchestrator |
2026-04-05 00:56:33.399689 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-05 00:56:33.399699 | orchestrator | Sunday 05 April 2026 00:52:19
+0000 (0:00:00.319) 0:02:48.005 ********** 2026-04-05 00:56:33.399709 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.399718 | orchestrator | 2026-04-05 00:56:33.399727 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-05 00:56:33.399737 | orchestrator | Sunday 05 April 2026 00:52:20 +0000 (0:00:01.144) 0:02:49.150 ********** 2026-04-05 00:56:33.399751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 00:56:33.399765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 00:56:33.399774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 00:56:33.399782 | orchestrator | 2026-04-05 00:56:33.399790 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-05 00:56:33.399798 | orchestrator | Sunday 05 April 2026 00:52:23 +0000 (0:00:03.367) 0:02:52.518 ********** 2026-04-05 00:56:33.399806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 00:56:33.399818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 00:56:33.399827 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.399835 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.399843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 00:56:33.399856 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.399864 | orchestrator | 2026-04-05 00:56:33.399871 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-05 00:56:33.399879 | orchestrator | Sunday 05 April 2026 00:52:24 +0000 (0:00:00.431) 0:02:52.950 ********** 2026-04-05 00:56:33.399890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-05 00:56:33.399899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-05 00:56:33.399907 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-05 00:56:33.399915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-05 00:56:33.399922 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.399930 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.399938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-05 00:56:33.399946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-05 00:56:33.399954 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.399962 | orchestrator | 2026-04-05 00:56:33.399969 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-05 00:56:33.399977 | orchestrator | Sunday 05 April 2026 00:52:25 +0000 (0:00:00.946) 0:02:53.896 ********** 2026-04-05 00:56:33.399985 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.399993 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.400001 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.400009 | orchestrator | 2026-04-05 00:56:33.400016 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-05 00:56:33.400024 | orchestrator | Sunday 05 April 2026 00:52:26 +0000 (0:00:01.437) 0:02:55.334 ********** 2026-04-05 00:56:33.400032 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.400040 | 
orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.400048 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.400055 | orchestrator | 2026-04-05 00:56:33.400063 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-05 00:56:33.400074 | orchestrator | Sunday 05 April 2026 00:52:28 +0000 (0:00:02.212) 0:02:57.546 ********** 2026-04-05 00:56:33.400093 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.400110 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.400123 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.400137 | orchestrator | 2026-04-05 00:56:33.400149 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-05 00:56:33.400162 | orchestrator | Sunday 05 April 2026 00:52:29 +0000 (0:00:00.369) 0:02:57.916 ********** 2026-04-05 00:56:33.400175 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.400206 | orchestrator | 2026-04-05 00:56:33.400221 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-05 00:56:33.400245 | orchestrator | Sunday 05 April 2026 00:52:30 +0000 (0:00:01.329) 0:02:59.246 ********** 2026-04-05 00:56:33.400290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:56:33.400303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:56:33.400339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 00:56:33.400350 | orchestrator | 2026-04-05 00:56:33.400358 | 
orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-05 00:56:33.400366 | orchestrator | Sunday 05 April 2026 00:52:34 +0000 (0:00:03.619) 0:03:02.866 ********** 2026-04-05 00:56:33.400379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:56:33.400393 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.400406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:56:33.400415 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.400428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 00:56:33.400442 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.400450 | orchestrator | 2026-04-05 00:56:33.400458 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-05 00:56:33.400466 | orchestrator | Sunday 05 April 2026 00:52:34 +0000 (0:00:00.691) 0:03:03.557 ********** 2026-04-05 00:56:33.400474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-05 00:56:33.400497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}}) 
 2026-04-05 00:56:33.400513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-05 00:56:33.400527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:33.400541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 00:56:33.400556 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.400569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-05 00:56:33.400584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:33.400607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-05 00:56:33.400622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:33.400636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-05 00:56:33.400645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 00:56:33.400653 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.400667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:33.400675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-05 00:56:33.400683 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 00:56:33.400695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 00:56:33.400704 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.400712 | orchestrator | 2026-04-05 00:56:33.400720 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-05 00:56:33.400728 | orchestrator | Sunday 05 April 2026 00:52:35 +0000 (0:00:00.992) 0:03:04.549 ********** 2026-04-05 00:56:33.400736 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.400743 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.400751 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.400759 | orchestrator | 2026-04-05 00:56:33.400767 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-05 00:56:33.400774 | orchestrator | Sunday 05 April 2026 00:52:37 +0000 (0:00:01.775) 0:03:06.325 ********** 2026-04-05 00:56:33.400782 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.400790 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.400798 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.400806 | orchestrator | 2026-04-05 00:56:33.400813 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-05 00:56:33.400821 | orchestrator | Sunday 05 April 2026 00:52:39 +0000 (0:00:02.046) 0:03:08.371 ********** 2026-04-05 00:56:33.400829 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.400837 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.400844 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.400857 | orchestrator | 2026-04-05 00:56:33.400865 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-05 00:56:33.400873 | orchestrator | Sunday 05 April 2026 00:52:40 +0000 (0:00:00.321) 0:03:08.693 ********** 2026-04-05 00:56:33.400880 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.400888 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.400896 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.400903 | orchestrator | 2026-04-05 00:56:33.400911 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-05 00:56:33.400919 | orchestrator | Sunday 05 April 2026 00:52:40 +0000 (0:00:00.304) 0:03:08.997 ********** 2026-04-05 00:56:33.400927 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.400934 | orchestrator | 2026-04-05 00:56:33.400942 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-05 00:56:33.400950 | orchestrator | Sunday 05 April 2026 00:52:41 +0000 (0:00:01.256) 0:03:10.254 ********** 2026-04-05 00:56:33.400958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 00:56:33.400974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 00:56:33.400987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 
00:56:33.400996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:56:33.401009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:56:33.401018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:56:33.401027 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 00:56:33.401040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:56:33.401052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:56:33.401060 | orchestrator | 2026-04-05 00:56:33.401068 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-05 00:56:33.401083 | orchestrator | Sunday 05 April 2026 00:52:45 +0000 (0:00:03.889) 0:03:14.144 ********** 2026-04-05 00:56:33.401092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 00:56:33.401101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:56:33.401109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:56:33.401117 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.401132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 00:56:33.401144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:56:33.401157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 00:56:33.401166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:56:33.401174 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.401182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 00:56:33.401209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 00:56:33.401217 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.401225 | orchestrator | 2026-04-05 00:56:33.401238 | orchestrator | TASK [haproxy-config : Configuring 
firewall for keystone] ********************** 2026-04-05 00:56:33.401246 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:00.729) 0:03:14.873 ********** 2026-04-05 00:56:33.401254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 00:56:33.401263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 00:56:33.401271 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.401284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 00:56:33.401296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 00:56:33.401305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 00:56:33.401313 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.401321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 00:56:33.401329 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.401337 | orchestrator | 2026-04-05 00:56:33.401345 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-05 00:56:33.401352 | orchestrator | Sunday 05 April 2026 00:52:47 +0000 (0:00:01.231) 0:03:16.105 ********** 2026-04-05 00:56:33.401360 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.401368 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.401376 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.401384 | orchestrator | 2026-04-05 00:56:33.401392 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-05 00:56:33.401400 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:01.500) 0:03:17.606 ********** 2026-04-05 00:56:33.401407 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.401415 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.401423 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.401431 | orchestrator | 2026-04-05 00:56:33.401439 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-05 00:56:33.401447 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:02.427) 0:03:20.033 ********** 2026-04-05 00:56:33.401455 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.401463 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.401470 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.401478 | orchestrator | 2026-04-05 00:56:33.401486 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-05 00:56:33.401494 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:00.480) 0:03:20.514 ********** 2026-04-05 00:56:33.401502 | orchestrator 
| included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.401510 | orchestrator | 2026-04-05 00:56:33.401517 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-05 00:56:33.401525 | orchestrator | Sunday 05 April 2026 00:52:53 +0000 (0:00:01.355) 0:03:21.869 ********** 2026-04-05 00:56:33.401534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 00:56:33.401551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.401564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 00:56:33.401573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.401582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 00:56:33.401590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.401603 | orchestrator | 2026-04-05 00:56:33.401611 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-05 00:56:33.401619 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:04.177) 0:03:26.046 ********** 2026-04-05 00:56:33.401633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 00:56:33.401647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.401655 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.401664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 00:56:33.401672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.401680 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.401692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 00:56:33.401706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.401714 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.401722 | orchestrator | 2026-04-05 00:56:33.401730 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-05 00:56:33.401738 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:00.791) 0:03:26.838 ********** 2026-04-05 00:56:33.401750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-05 00:56:33.401758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-05 00:56:33.401766 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
00:56:33.401774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-05 00:56:33.401782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-05 00:56:33.401790 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.401798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-05 00:56:33.401806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-05 00:56:33.401814 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.401822 | orchestrator | 2026-04-05 00:56:33.401830 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-05 00:56:33.401838 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:01.475) 0:03:28.314 ********** 2026-04-05 00:56:33.401845 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.401853 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.401861 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.401869 | orchestrator | 2026-04-05 00:56:33.401877 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-05 00:56:33.401884 | orchestrator | Sunday 05 April 2026 00:53:01 +0000 (0:00:01.664) 0:03:29.978 ********** 2026-04-05 00:56:33.401897 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.401905 | orchestrator | changed: [testbed-node-1] 2026-04-05 
00:56:33.401913 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.401920 | orchestrator | 2026-04-05 00:56:33.401928 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-05 00:56:33.401936 | orchestrator | Sunday 05 April 2026 00:53:03 +0000 (0:00:02.274) 0:03:32.253 ********** 2026-04-05 00:56:33.401944 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.401951 | orchestrator | 2026-04-05 00:56:33.401959 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-05 00:56:33.401967 | orchestrator | Sunday 05 April 2026 00:53:04 +0000 (0:00:01.082) 0:03:33.336 ********** 2026-04-05 00:56:33.401979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 00:56:33.401988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 00:56:33.402172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 00:56:33.402269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402299 | orchestrator | 2026-04-05 00:56:33.402308 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-05 00:56:33.402316 | orchestrator | Sunday 05 April 2026 00:53:09 +0000 (0:00:04.515) 0:03:37.851 ********** 2026-04-05 00:56:33.402329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 00:56:33.402337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402371 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.402379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 00:56:33.402387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402396 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402416 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.402428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 00:56:33.402437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.402468 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.402476 | orchestrator | 2026-04-05 00:56:33.402484 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-05 00:56:33.402492 | orchestrator | Sunday 05 April 2026 00:53:09 +0000 (0:00:00.695) 0:03:38.547 ********** 2026-04-05 00:56:33.402500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-05 00:56:33.402508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-05 00:56:33.402516 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.402528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-05 00:56:33.402536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-05 00:56:33.402544 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.402552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-05 00:56:33.402560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-05 00:56:33.402568 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.402576 | orchestrator | 2026-04-05 00:56:33.402584 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-05 00:56:33.402592 | orchestrator | Sunday 05 April 2026 00:53:10 +0000 (0:00:00.888) 0:03:39.436 ********** 2026-04-05 00:56:33.402600 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.402608 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.402616 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.402623 | orchestrator | 2026-04-05 00:56:33.402631 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-05 00:56:33.402647 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:01.378) 0:03:40.814 ********** 2026-04-05 00:56:33.402655 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.402663 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.402671 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.402678 | orchestrator | 2026-04-05 00:56:33.402686 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-05 00:56:33.402694 | orchestrator | Sunday 05 April 2026 00:53:14 +0000 (0:00:02.238) 0:03:43.053 ********** 2026-04-05 00:56:33.402702 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.402710 | orchestrator | 2026-04-05 00:56:33.402718 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-05 00:56:33.402725 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:01.341) 0:03:44.395 ********** 2026-04-05 00:56:33.402733 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 00:56:33.402741 | orchestrator | 2026-04-05 00:56:33.402749 | orchestrator | TASK 
[haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-05 00:56:33.402758 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:03.507) 0:03:47.903 ********** 2026-04-05 00:56:33.402769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:33.402784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:33.402792 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.402804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:33.402820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:33.402828 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.402841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:33.402854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:33.402865 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.402873 | orchestrator | 2026-04-05 00:56:33.402881 | orchestrator | TASK 
[haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-05 00:56:33.402889 | orchestrator | Sunday 05 April 2026 00:53:22 +0000 (0:00:02.843) 0:03:50.746 ********** 2026-04-05 00:56:33.402898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:33.402906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:33.402914 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.402931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:33.402944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:33.402953 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.402961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:56:33.402974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 00:56:33.402986 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.402996 | orchestrator | 2026-04-05 00:56:33.403008 | orchestrator | TASK 
[haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-05 00:56:33.403020 | orchestrator | Sunday 05 April 2026 00:53:25 +0000 (0:00:03.301) 0:03:54.048 ********** 2026-04-05 00:56:33.403036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:33.403049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:33.403060 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:33.403084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:33.403096 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:33.403126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 00:56:33.403144 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403156 | orchestrator | 2026-04-05 00:56:33.403167 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-05 00:56:33.403179 | orchestrator | Sunday 05 April 2026 00:53:27 +0000 (0:00:02.501) 0:03:56.550 ********** 2026-04-05 00:56:33.403204 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.403215 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.403227 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.403237 | orchestrator | 2026-04-05 00:56:33.403244 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-05 00:56:33.403250 | orchestrator | Sunday 05 April 2026 00:53:30 +0000 (0:00:02.150) 0:03:58.700 ********** 2026-04-05 00:56:33.403257 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403264 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403270 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403277 | orchestrator | 2026-04-05 00:56:33.403284 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-05 00:56:33.403290 | orchestrator | Sunday 05 April 2026 00:53:31 +0000 (0:00:01.885) 0:04:00.586 ********** 2026-04-05 00:56:33.403297 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403303 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403310 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403316 | orchestrator | 2026-04-05 00:56:33.403327 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-05 00:56:33.403334 | orchestrator | Sunday 05 April 2026 00:53:32 +0000 (0:00:00.356) 0:04:00.943 ********** 2026-04-05 00:56:33.403341 | orchestrator | 
included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.403347 | orchestrator | 2026-04-05 00:56:33.403354 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-05 00:56:33.403361 | orchestrator | Sunday 05 April 2026 00:53:33 +0000 (0:00:01.515) 0:04:02.458 ********** 2026-04-05 00:56:33.403368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:56:33.403376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:56:33.403383 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 00:56:33.403395 | orchestrator | 2026-04-05 00:56:33.403402 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-05 00:56:33.403409 | orchestrator | Sunday 05 April 2026 00:53:35 +0000 (0:00:01.496) 0:04:03.955 ********** 2026-04-05 00:56:33.403421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:56:33.403432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': 
True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:56:33.403439 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403445 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 00:56:33.403459 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403466 | orchestrator | 2026-04-05 00:56:33.403473 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-05 00:56:33.403479 | orchestrator | Sunday 05 April 2026 00:53:35 +0000 (0:00:00.446) 0:04:04.402 ********** 2026-04-05 00:56:33.403487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 
'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:56:33.403498 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:56:33.403512 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 00:56:33.403525 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403532 | orchestrator | 2026-04-05 00:56:33.403539 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-05 00:56:33.403545 | orchestrator | Sunday 05 April 2026 00:53:36 +0000 (0:00:01.041) 0:04:05.443 ********** 2026-04-05 00:56:33.403552 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403558 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403565 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403571 | orchestrator | 2026-04-05 00:56:33.403578 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-05 00:56:33.403585 | orchestrator | Sunday 05 April 2026 00:53:37 +0000 (0:00:00.408) 0:04:05.851 ********** 2026-04-05 00:56:33.403591 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403598 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403605 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403611 | orchestrator | 2026-04-05 00:56:33.403618 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-05 00:56:33.403625 | orchestrator | Sunday 05 April 2026 00:53:38 +0000 (0:00:01.256) 0:04:07.107 ********** 2026-04-05 00:56:33.403631 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.403638 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.403645 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.403655 | orchestrator | 2026-04-05 00:56:33.403662 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-05 00:56:33.403669 | orchestrator | Sunday 05 April 2026 00:53:38 +0000 (0:00:00.323) 0:04:07.431 ********** 2026-04-05 00:56:33.403676 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.403682 | orchestrator | 2026-04-05 00:56:33.403689 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-05 00:56:33.403695 | orchestrator | Sunday 05 April 2026 00:53:40 +0000 (0:00:01.421) 0:04:08.852 ********** 2026-04-05 00:56:33.403706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 00:56:33.403713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 00:56:33.403725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 00:56:33.403815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 00:56:33.403828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 00:56:33.403839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.403872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.403900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 
00:56:33.403908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-04-05 00:56:33.403948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.403956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.403982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 00:56:33.403993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.404014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.404062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.404254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.404320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.404387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.404417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404439 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.404494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.404505 | orchestrator | 2026-04-05 00:56:33.404512 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-05 00:56:33.404524 | orchestrator | Sunday 05 April 2026 00:53:44 +0000 (0:00:04.130) 0:04:12.982 ********** 2026-04-05 00:56:33.404535 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 00:56:33.404542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 00:56:33.404612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.404652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 00:56:33.404705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 00:56:33.404810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 00:56:33.404817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.404824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.404907 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.404914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.404921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.404991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.405001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 00:56:33.405008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 00:56:33.405034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.405074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.405088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 
'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.405105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.405144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.405231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.405243 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.405250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 00:56:33.405269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 00:56:33.405276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 00:56:33.405315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 00:56:33.405323 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.405330 | orchestrator | 2026-04-05 00:56:33.405337 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-05 00:56:33.405344 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:01.687) 0:04:14.669 ********** 2026-04-05 00:56:33.405351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-05 00:56:33.405358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-05 00:56:33.405364 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.405371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-05 00:56:33.405383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-05 00:56:33.405390 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.405397 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-05 00:56:33.405404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-05 00:56:33.405410 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.405417 | orchestrator | 2026-04-05 00:56:33.405424 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-05 00:56:33.405430 | orchestrator | Sunday 05 April 2026 00:53:47 +0000 (0:00:01.495) 0:04:16.165 ********** 2026-04-05 00:56:33.405437 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.405443 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.405450 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.405457 | orchestrator | 2026-04-05 00:56:33.405463 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-05 00:56:33.405470 | orchestrator | Sunday 05 April 2026 00:53:48 +0000 (0:00:01.361) 0:04:17.527 ********** 2026-04-05 00:56:33.405477 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.405488 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.405495 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.405502 | orchestrator | 2026-04-05 00:56:33.405508 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-05 00:56:33.405515 | orchestrator | Sunday 05 April 2026 00:53:51 +0000 (0:00:02.242) 0:04:19.770 ********** 2026-04-05 00:56:33.405521 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.405528 | orchestrator | 2026-04-05 00:56:33.405534 | orchestrator | TASK [haproxy-config : Copying 
over placement haproxy config] ****************** 2026-04-05 00:56:33.405541 | orchestrator | Sunday 05 April 2026 00:53:52 +0000 (0:00:01.535) 0:04:21.305 ********** 2026-04-05 00:56:33.405548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.405573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.405584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.405592 | orchestrator | 2026-04-05 00:56:33.405598 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-05 00:56:33.405605 | orchestrator | Sunday 05 April 2026 00:53:56 +0000 (0:00:03.438) 0:04:24.743 ********** 2026-04-05 00:56:33.405612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.405623 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.405630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.405637 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.405661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.405669 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.405676 | orchestrator | 2026-04-05 00:56:33.405683 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-05 00:56:33.405689 | orchestrator | Sunday 05 April 2026 00:53:56 +0000 (0:00:00.512) 0:04:25.256 ********** 2026-04-05 00:56:33.405696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-05 00:56:33.405703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-05 00:56:33.405710 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.405717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-05 00:56:33.405728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-05 00:56:33.405737 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.405749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-05 
00:56:33.405756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-05 00:56:33.405764 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.405771 | orchestrator | 2026-04-05 00:56:33.405778 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-05 00:56:33.405785 | orchestrator | Sunday 05 April 2026 00:53:58 +0000 (0:00:01.511) 0:04:26.768 ********** 2026-04-05 00:56:33.405792 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.405799 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.405806 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.405813 | orchestrator | 2026-04-05 00:56:33.405821 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-05 00:56:33.405828 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:01.332) 0:04:28.100 ********** 2026-04-05 00:56:33.405835 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.405842 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.405849 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.405856 | orchestrator | 2026-04-05 00:56:33.405863 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-05 00:56:33.405870 | orchestrator | Sunday 05 April 2026 00:54:01 +0000 (0:00:01.884) 0:04:29.985 ********** 2026-04-05 00:56:33.405877 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.405884 | orchestrator | 2026-04-05 00:56:33.405891 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-05 00:56:33.405899 | orchestrator | Sunday 05 April 2026 00:54:02 +0000 (0:00:01.363) 
0:04:31.349 ********** 2026-04-05 00:56:33.405912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.405960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.405997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.406009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.406111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406124 | orchestrator | 2026-04-05 00:56:33.406130 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-05 00:56:33.406137 | orchestrator | Sunday 05 April 2026 00:54:07 +0000 (0:00:05.306) 0:04:36.655 ********** 2026-04-05 00:56:33.406143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.406150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406202 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.406213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.406220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.406228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406271 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.406280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.406287 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.406293 | orchestrator | 2026-04-05 00:56:33.406299 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-05 00:56:33.406305 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:00.691) 0:04:37.347 ********** 
2026-04-05 00:56:33.406312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406339 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.406345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406383 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.406393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-05 00:56:33.406423 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.406431 | orchestrator | 2026-04-05 00:56:33.406437 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-05 00:56:33.406443 | orchestrator | Sunday 05 April 2026 00:54:09 +0000 (0:00:01.003) 0:04:38.351 ********** 2026-04-05 00:56:33.406449 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.406456 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.406462 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.406468 | orchestrator | 2026-04-05 00:56:33.406474 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-05 00:56:33.406480 | orchestrator | Sunday 05 April 2026 00:54:11 +0000 
(0:00:01.884) 0:04:40.235 ********** 2026-04-05 00:56:33.406486 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.406492 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.406498 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.406505 | orchestrator | 2026-04-05 00:56:33.406511 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-05 00:56:33.406517 | orchestrator | Sunday 05 April 2026 00:54:13 +0000 (0:00:02.189) 0:04:42.425 ********** 2026-04-05 00:56:33.406523 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.406529 | orchestrator | 2026-04-05 00:56:33.406535 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-05 00:56:33.406541 | orchestrator | Sunday 05 April 2026 00:54:15 +0000 (0:00:01.310) 0:04:43.735 ********** 2026-04-05 00:56:33.406553 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-05 00:56:33.406560 | orchestrator | 2026-04-05 00:56:33.406566 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-05 00:56:33.406572 | orchestrator | Sunday 05 April 2026 00:54:16 +0000 (0:00:01.451) 0:04:45.186 ********** 2026-04-05 00:56:33.406579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:56:33.406586 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:56:33.406592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 00:56:33.406602 | orchestrator | 2026-04-05 00:56:33.406608 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-05 00:56:33.406614 | orchestrator | Sunday 05 April 2026 00:54:20 +0000 (0:00:04.044) 0:04:49.231 ********** 2026-04-05 00:56:33.406621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406627 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.406633 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406656 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.406664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406671 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.406677 | orchestrator | 2026-04-05 00:56:33.406683 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-05 00:56:33.406689 | orchestrator | Sunday 05 April 2026 00:54:22 +0000 (0:00:01.489) 0:04:50.720 ********** 2026-04-05 00:56:33.406695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:33.406704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:33.406711 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.406717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:33.406724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:33.406730 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.406736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:33.406743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 00:56:33.406753 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.406759 | orchestrator | 2026-04-05 00:56:33.406765 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 00:56:33.406771 | orchestrator | Sunday 05 April 2026 00:54:24 +0000 (0:00:02.022) 0:04:52.743 ********** 2026-04-05 00:56:33.406777 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.406783 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.406789 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.406795 | orchestrator | 2026-04-05 00:56:33.406801 | orchestrator | TASK [proxysql-config : 
Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:56:33.406808 | orchestrator | Sunday 05 April 2026 00:54:26 +0000 (0:00:02.472) 0:04:55.216 ********** 2026-04-05 00:56:33.406814 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.406820 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.406826 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.406832 | orchestrator | 2026-04-05 00:56:33.406838 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-05 00:56:33.406844 | orchestrator | Sunday 05 April 2026 00:54:29 +0000 (0:00:03.124) 0:04:58.340 ********** 2026-04-05 00:56:33.406851 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-05 00:56:33.406857 | orchestrator | 2026-04-05 00:56:33.406863 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-05 00:56:33.406869 | orchestrator | Sunday 05 April 2026 00:54:30 +0000 (0:00:00.872) 0:04:59.213 ********** 2026-04-05 00:56:33.406875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406882 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.406905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406913 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.406920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406926 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.406932 | orchestrator | 2026-04-05 00:56:33.406941 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-05 00:56:33.406948 | orchestrator | Sunday 05 April 2026 00:54:31 +0000 (0:00:01.383) 0:05:00.597 ********** 2026-04-05 00:56:33.406954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 
00:56:33.406964 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.406970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406977 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.406983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 00:56:33.406989 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.406995 | orchestrator | 2026-04-05 00:56:33.407001 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-05 00:56:33.407007 | orchestrator | Sunday 05 April 2026 00:54:33 +0000 (0:00:01.734) 0:05:02.331 ********** 2026-04-05 00:56:33.407013 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.407020 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.407026 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.407032 | orchestrator | 2026-04-05 00:56:33.407038 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] 
********** 2026-04-05 00:56:33.407044 | orchestrator | Sunday 05 April 2026 00:54:34 +0000 (0:00:01.303) 0:05:03.634 ********** 2026-04-05 00:56:33.407050 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:33.407060 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:33.407072 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:33.407083 | orchestrator | 2026-04-05 00:56:33.407094 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:56:33.407106 | orchestrator | Sunday 05 April 2026 00:54:37 +0000 (0:00:02.455) 0:05:06.090 ********** 2026-04-05 00:56:33.407116 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:33.407128 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:33.407138 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:33.407149 | orchestrator | 2026-04-05 00:56:33.407161 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-05 00:56:33.407173 | orchestrator | Sunday 05 April 2026 00:54:40 +0000 (0:00:03.095) 0:05:09.186 ********** 2026-04-05 00:56:33.407199 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-05 00:56:33.407210 | orchestrator | 2026-04-05 00:56:33.407254 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-05 00:56:33.407266 | orchestrator | Sunday 05 April 2026 00:54:41 +0000 (0:00:00.839) 0:05:10.026 ********** 2026-04-05 00:56:33.407277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:33.407298 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.407313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:33.407323 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.407329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:33.407336 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.407342 | orchestrator | 2026-04-05 00:56:33.407348 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-05 00:56:33.407355 | orchestrator | Sunday 05 April 2026 00:54:42 +0000 (0:00:01.452) 0:05:11.478 ********** 2026-04-05 00:56:33.407361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:33.407367 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.407374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:33.407380 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.407386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 00:56:33.407392 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.407399 | orchestrator | 2026-04-05 00:56:33.407405 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-05 00:56:33.407411 | orchestrator | Sunday 05 April 2026 00:54:44 +0000 
(0:00:01.343) 0:05:12.822 ********** 2026-04-05 00:56:33.407417 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.407423 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.407430 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.407440 | orchestrator | 2026-04-05 00:56:33.407446 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 00:56:33.407474 | orchestrator | Sunday 05 April 2026 00:54:45 +0000 (0:00:01.552) 0:05:14.374 ********** 2026-04-05 00:56:33.407482 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:33.407488 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:33.407494 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:33.407500 | orchestrator | 2026-04-05 00:56:33.407506 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 00:56:33.407512 | orchestrator | Sunday 05 April 2026 00:54:48 +0000 (0:00:02.803) 0:05:17.177 ********** 2026-04-05 00:56:33.407518 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:56:33.407524 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:56:33.407530 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:56:33.407537 | orchestrator | 2026-04-05 00:56:33.407543 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-05 00:56:33.407549 | orchestrator | Sunday 05 April 2026 00:54:51 +0000 (0:00:03.161) 0:05:20.338 ********** 2026-04-05 00:56:33.407555 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.407561 | orchestrator | 2026-04-05 00:56:33.407567 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-05 00:56:33.407573 | orchestrator | Sunday 05 April 2026 00:54:52 +0000 (0:00:01.293) 0:05:21.632 ********** 2026-04-05 00:56:33.407583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 
'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.407591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.407598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:33.407605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:33.407634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.407672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.407701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.407709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:33.407719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.407738 | orchestrator | 2026-04-05 00:56:33.407745 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-05 00:56:33.407751 | orchestrator | Sunday 05 April 2026 00:54:56 +0000 (0:00:03.673) 0:05:25.306 ********** 2026-04-05 00:56:33.407758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.407768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:33.407792 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.407819 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.407826 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.407836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:33.407843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.407880 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.407943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.407960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 00:56:33.407967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.407978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 00:56:33.408007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 00:56:33.408015 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.408021 | orchestrator | 2026-04-05 00:56:33.408028 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-05 00:56:33.408034 | orchestrator | Sunday 05 April 2026 00:54:57 +0000 (0:00:01.079) 0:05:26.385 ********** 2026-04-05 00:56:33.408041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:33.408047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:33.408054 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.408060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:33.408069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:33.408076 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.408082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:33.408088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 00:56:33.408095 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.408101 | orchestrator | 2026-04-05 00:56:33.408107 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-05 00:56:33.408113 | orchestrator | Sunday 05 April 2026 00:54:58 +0000 (0:00:00.996) 0:05:27.382 ********** 2026-04-05 00:56:33.408119 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.408129 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.408136 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:56:33.408142 | orchestrator | 2026-04-05 00:56:33.408148 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-05 00:56:33.408154 | orchestrator | Sunday 05 April 2026 00:55:00 +0000 (0:00:01.383) 0:05:28.766 ********** 2026-04-05 00:56:33.408160 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:56:33.408166 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:56:33.408172 | orchestrator | changed: [testbed-node-2] 
2026-04-05 00:56:33.408178 | orchestrator | 2026-04-05 00:56:33.408203 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-05 00:56:33.408209 | orchestrator | Sunday 05 April 2026 00:55:02 +0000 (0:00:02.352) 0:05:31.118 ********** 2026-04-05 00:56:33.408216 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.408222 | orchestrator | 2026-04-05 00:56:33.408228 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-05 00:56:33.408234 | orchestrator | Sunday 05 April 2026 00:55:04 +0000 (0:00:01.758) 0:05:32.877 ********** 2026-04-05 00:56:33.408241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 00:56:33.408266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 00:56:33.408278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 00:56:33.408285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:56:33.408300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:56:33.408326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:56:33.408334 | orchestrator |
2026-04-05 00:56:33.408340 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-05 00:56:33.408346 | orchestrator | Sunday 05 April 2026 00:55:09 +0000 (0:00:05.493) 0:05:38.371 **********
2026-04-05 00:56:33.408356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:56:33.408367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:56:33.408374 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.408381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:56:33.408404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:56:33.408412 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.408418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:56:33.408428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:56:33.408438 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.408445 | orchestrator |
2026-04-05 00:56:33.408451 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-05 00:56:33.408458 | orchestrator | Sunday 05 April 2026 00:55:10 +0000 (0:00:00.818) 0:05:39.189 **********
2026-04-05 00:56:33.408464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-05 00:56:33.408470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 00:56:33.408477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 00:56:33.408484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-05 00:56:33.408490 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.408497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 00:56:33.408503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 00:56:33.408510 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.408516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-05 00:56:33.408543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 00:56:33.408551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 00:56:33.408558 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.408564 | orchestrator |
2026-04-05 00:56:33.408570 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-05 00:56:33.408576 | orchestrator | Sunday 05 April 2026 00:55:11 +0000 (0:00:01.131) 0:05:40.320 **********
2026-04-05 00:56:33.408586 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.408592 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.408598 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.408605 | orchestrator |
2026-04-05 00:56:33.408611 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-05 00:56:33.408617 | orchestrator | Sunday 05 April 2026 00:55:12 +0000 (0:00:00.424) 0:05:40.744 **********
2026-04-05 00:56:33.408623 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.408629 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.408635 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.408641 | orchestrator |
2026-04-05 00:56:33.408647 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-05 00:56:33.408653 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:01.396) 0:05:42.140 **********
2026-04-05 00:56:33.408663 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:56:33.408669 | orchestrator |
2026-04-05 00:56:33.408675 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-05 00:56:33.408681 | orchestrator | Sunday 05 April 2026 00:55:15 +0000 (0:00:01.755) 0:05:43.896 **********
2026-04-05 00:56:33.408687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 00:56:33.408694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 00:56:33.408701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 00:56:33.408732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.408752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 00:56:33.408759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.408779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 00:56:33.408804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 00:56:33.408818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.408840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 00:56:33.408847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-05 00:56:33.408854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.408883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 00:56:33.408890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-05 00:56:33.408897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.408924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 00:56:33.408934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-05 00:56:33.408940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.408953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.408964 | orchestrator |
2026-04-05 00:56:33.408970 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-05 00:56:33.408977 | orchestrator | Sunday 05 April 2026 00:55:20 +0000 (0:00:04.843) 0:05:48.740 **********
2026-04-05 00:56:33.408986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 00:56:33.408993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 00:56:33.409002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.409009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.409016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.409022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 00:56:33.409038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-05 00:56:33.409045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.409054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 00:56:33.409061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 00:56:33.409068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 00:56:33.409074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 00:56:33.409084 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.409091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:33.409117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-05 00:56:33.409124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-05 00:56:33.409131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-05 00:56:33.409147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 00:56:33.409164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:33.409170 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 00:56:33.409241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-05 00:56:33.409252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-05 00:56:33.409259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 00:56:33.409277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 00:56:33.409284 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.409290 | orchestrator | 2026-04-05 00:56:33.409296 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-05 00:56:33.409302 | orchestrator | Sunday 05 April 2026 00:55:20 +0000 (0:00:00.929) 0:05:49.670 ********** 2026-04-05 00:56:33.409309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': 
True}})  2026-04-05 00:56:33.409319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-05 00:56:33.409326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 00:56:33.409333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 00:56:33.409340 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-05 00:56:33.409352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-05 00:56:33.409359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 00:56:33.409365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 00:56:33.409371 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-05 00:56:33.409387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-05 00:56:33.409394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 00:56:33.409400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 00:56:33.409406 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.409413 | orchestrator | 2026-04-05 00:56:33.409422 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-05 00:56:33.409428 | orchestrator | Sunday 05 April 2026 00:55:22 +0000 (0:00:01.392) 0:05:51.062 ********** 2026-04-05 00:56:33.409434 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409440 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409446 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 00:56:33.409453 | orchestrator | 2026-04-05 00:56:33.409459 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-05 00:56:33.409469 | orchestrator | Sunday 05 April 2026 00:55:22 +0000 (0:00:00.497) 0:05:51.559 ********** 2026-04-05 00:56:33.409476 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409482 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409488 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.409494 | orchestrator | 2026-04-05 00:56:33.409500 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-05 00:56:33.409507 | orchestrator | Sunday 05 April 2026 00:55:24 +0000 (0:00:01.226) 0:05:52.786 ********** 2026-04-05 00:56:33.409513 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.409519 | orchestrator | 2026-04-05 00:56:33.409525 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-05 00:56:33.409531 | orchestrator | Sunday 05 April 2026 00:55:25 +0000 (0:00:01.402) 0:05:54.188 ********** 2026-04-05 00:56:33.409538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:56:33.409545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:56:33.409555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 00:56:33.409562 | orchestrator | 2026-04-05 00:56:33.409569 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-05 00:56:33.409578 | orchestrator | Sunday 05 April 2026 00:55:27 +0000 (0:00:02.315) 0:05:56.504 ********** 2026-04-05 00:56:33.409587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:56:33.409593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:56:33.409599 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409604 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 00:56:33.409616 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.409621 | orchestrator | 2026-04-05 00:56:33.409629 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-05 00:56:33.409634 | orchestrator | Sunday 05 
April 2026 00:55:28 +0000 (0:00:00.395) 0:05:56.900 ********** 2026-04-05 00:56:33.409640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:56:33.409645 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:56:33.409656 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 00:56:33.409671 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.409676 | orchestrator | 2026-04-05 00:56:33.409682 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-05 00:56:33.409687 | orchestrator | Sunday 05 April 2026 00:55:28 +0000 (0:00:00.574) 0:05:57.474 ********** 2026-04-05 00:56:33.409692 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409698 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409703 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.409709 | orchestrator | 2026-04-05 00:56:33.409714 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-05 00:56:33.409722 | orchestrator | Sunday 05 April 2026 00:55:29 +0000 (0:00:00.879) 0:05:58.354 ********** 2026-04-05 00:56:33.409728 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409733 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409739 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:56:33.409744 | orchestrator | 2026-04-05 00:56:33.409749 | orchestrator | TASK [include_role : skyline] 
************************************************** 2026-04-05 00:56:33.409755 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:01.409) 0:05:59.764 ********** 2026-04-05 00:56:33.409760 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:56:33.409765 | orchestrator | 2026-04-05 00:56:33.409771 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-05 00:56:33.409776 | orchestrator | Sunday 05 April 2026 00:55:32 +0000 (0:00:01.616) 0:06:01.380 ********** 2026-04-05 00:56:33.409782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.409788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.409797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.409809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.409816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.409822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 00:56:33.409828 | orchestrator | 2026-04-05 00:56:33.409833 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-05 00:56:33.409839 | orchestrator | Sunday 05 April 2026 00:55:38 +0000 (0:00:05.835) 0:06:07.215 ********** 2026-04-05 00:56:33.409847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.409856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.409864 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:56:33.409870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.409876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.409881 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:56:33.409887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 00:56:33.409899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-05 00:56:33.409904 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.409910 | orchestrator |
2026-04-05 00:56:33.409916 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-05 00:56:33.409921 | orchestrator | Sunday 05 April 2026 00:55:39 +0000 (0:00:01.010) 0:06:08.225 **********
2026-04-05 00:56:33.409929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409951 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.409957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409979 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.409984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-05 00:56:33.409990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-05 00:56:33.410000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-05 00:56:33.410006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external',
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-05 00:56:33.410030 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410038 | orchestrator |
2026-04-05 00:56:33.410044 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-05 00:56:33.410050 | orchestrator | Sunday 05 April 2026 00:55:40 +0000 (0:00:01.040) 0:06:09.266 **********
2026-04-05 00:56:33.410055 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.410060 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.410066 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.410071 | orchestrator |
2026-04-05 00:56:33.410079 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-05 00:56:33.410085 | orchestrator | Sunday 05 April 2026 00:55:41 +0000 (0:00:01.368) 0:06:10.635 **********
2026-04-05 00:56:33.410090 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.410096 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.410101 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.410107 | orchestrator |
2026-04-05 00:56:33.410112 | orchestrator | TASK [include_role : swift] ****************************************************
2026-04-05 00:56:33.410117 | orchestrator | Sunday 05 April 2026 00:55:44 +0000 (0:00:02.278) 0:06:12.913 **********
2026-04-05 00:56:33.410123 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410128 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410134 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410139 | orchestrator |
2026-04-05 00:56:33.410144 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-05 00:56:33.410150 | orchestrator | Sunday 05 April 2026 00:55:44 +0000 (0:00:00.667) 0:06:13.580 **********
2026-04-05 00:56:33.410155 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410161 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410166 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410172 | orchestrator |
2026-04-05 00:56:33.410177 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-05 00:56:33.410182 | orchestrator | Sunday 05 April 2026 00:55:45 +0000 (0:00:00.390) 0:06:13.971 **********
2026-04-05 00:56:33.410196 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410202 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410207 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410213 | orchestrator |
2026-04-05 00:56:33.410223 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-05 00:56:33.410229 | orchestrator | Sunday 05 April 2026 00:55:45 +0000 (0:00:00.374) 0:06:14.346 **********
2026-04-05 00:56:33.410234 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410239 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410245 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410250 | orchestrator |
2026-04-05 00:56:33.410256 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-05 00:56:33.410261 | orchestrator | Sunday 05 April 2026 00:55:45 +0000 (0:00:00.317) 0:06:14.663 **********
2026-04-05 00:56:33.410266 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410272 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410277 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410282 | orchestrator |
2026-04-05 00:56:33.410288 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-05 00:56:33.410297 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:00.653)
0:06:15.317 **********
2026-04-05 00:56:33.410303 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410308 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410314 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410319 | orchestrator |
2026-04-05 00:56:33.410325 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-04-05 00:56:33.410330 | orchestrator | Sunday 05 April 2026 00:55:47 +0000 (0:00:00.625) 0:06:15.943 **********
2026-04-05 00:56:33.410335 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.410341 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.410346 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.410352 | orchestrator |
2026-04-05 00:56:33.410357 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-04-05 00:56:33.410363 | orchestrator | Sunday 05 April 2026 00:55:47 +0000 (0:00:00.721) 0:06:16.665 **********
2026-04-05 00:56:33.410368 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.410373 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.410379 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.410384 | orchestrator |
2026-04-05 00:56:33.410390 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-04-05 00:56:33.410395 | orchestrator | Sunday 05 April 2026 00:55:48 +0000 (0:00:00.676) 0:06:17.341 **********
2026-04-05 00:56:33.410400 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.410406 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.410411 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.410416 | orchestrator |
2026-04-05 00:56:33.410422 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-04-05 00:56:33.410427 | orchestrator | Sunday 05 April 2026 00:55:49 +0000 (0:00:00.919) 0:06:18.260 **********
2026-04-05 00:56:33.410433 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.410438 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.410443 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.410448 | orchestrator |
2026-04-05 00:56:33.410454 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-04-05 00:56:33.410459 | orchestrator | Sunday 05 April 2026 00:55:50 +0000 (0:00:00.959) 0:06:19.219 **********
2026-04-05 00:56:33.410464 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.410470 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.410475 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.410480 | orchestrator |
2026-04-05 00:56:33.410486 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-04-05 00:56:33.410491 | orchestrator | Sunday 05 April 2026 00:55:51 +0000 (0:00:00.893) 0:06:20.112 **********
2026-04-05 00:56:33.410497 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.410502 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.410508 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.410513 | orchestrator |
2026-04-05 00:56:33.410518 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-05 00:56:33.410524 | orchestrator | Sunday 05 April 2026 00:55:56 +0000 (0:00:05.413) 0:06:25.526 **********
2026-04-05 00:56:33.410529 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.410535 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.410540 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.410545 | orchestrator |
2026-04-05 00:56:33.410551 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-05 00:56:33.410556 | orchestrator | Sunday 05 April 2026 00:56:01 +0000 (0:00:04.157) 0:06:29.683 **********
2026-04-05 00:56:33.410561 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.410567 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.410572 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.410578 | orchestrator |
2026-04-05 00:56:33.410583 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-05 00:56:33.410591 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:15.127) 0:06:44.811 **********
2026-04-05 00:56:33.410597 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.410607 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.410612 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.410620 | orchestrator |
2026-04-05 00:56:33.410630 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-05 00:56:33.410644 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:00.743) 0:06:45.555 **********
2026-04-05 00:56:33.410655 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:56:33.410664 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:56:33.410673 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:56:33.410683 | orchestrator |
2026-04-05 00:56:33.410692 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-05 00:56:33.410700 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:09.638) 0:06:55.194 **********
2026-04-05 00:56:33.410709 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410719 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410728 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410737 | orchestrator |
2026-04-05 00:56:33.410746 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-05 00:56:33.410755 | orchestrator | Sunday 05 April 2026 00:56:27 +0000 (0:00:00.577) 0:06:55.771 **********
2026-04-05 00:56:33.410764 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410774 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410782 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410791 | orchestrator |
2026-04-05 00:56:33.410800 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-05 00:56:33.410814 | orchestrator | Sunday 05 April 2026 00:56:27 +0000 (0:00:00.345) 0:06:56.117 **********
2026-04-05 00:56:33.410823 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410831 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410839 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410848 | orchestrator |
2026-04-05 00:56:33.410858 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-05 00:56:33.410867 | orchestrator | Sunday 05 April 2026 00:56:27 +0000 (0:00:00.305) 0:06:56.422 **********
2026-04-05 00:56:33.410876 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410885 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410894 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410904 | orchestrator |
2026-04-05 00:56:33.410914 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-05 00:56:33.410924 | orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:00.330) 0:06:56.752 **********
2026-04-05 00:56:33.410933 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.410942 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.410952 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.410962 | orchestrator |
2026-04-05 00:56:33.410972 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-05 00:56:33.410982 | orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:00.535) 0:06:57.288 **********
2026-04-05 00:56:33.410992 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:56:33.411001 |
orchestrator | skipping: [testbed-node-1]
2026-04-05 00:56:33.411011 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:56:33.411021 | orchestrator |
2026-04-05 00:56:33.411030 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-05 00:56:33.411040 | orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:00.332) 0:06:57.621 **********
2026-04-05 00:56:33.411050 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.411060 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.411070 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.411079 | orchestrator |
2026-04-05 00:56:33.411090 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-05 00:56:33.411096 | orchestrator | Sunday 05 April 2026 00:56:29 +0000 (0:00:00.855) 0:06:58.477 **********
2026-04-05 00:56:33.411101 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:56:33.411107 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:56:33.411120 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:56:33.411125 | orchestrator |
2026-04-05 00:56:33.411131 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:56:33.411136 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-05 00:56:33.411142 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-05 00:56:33.411148 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-05 00:56:33.411153 | orchestrator |
2026-04-05 00:56:33.411158 | orchestrator |
2026-04-05 00:56:33.411164 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:56:33.411169 | orchestrator | Sunday 05 April 2026 00:56:30 +0000 (0:00:00.798) 0:06:59.276 **********
2026-04-05 00:56:33.411175 | orchestrator | ===============================================================================
2026-04-05 00:56:33.411180 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.13s
2026-04-05 00:56:33.411218 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.64s
2026-04-05 00:56:33.411224 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.84s
2026-04-05 00:56:33.411229 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.71s
2026-04-05 00:56:33.411234 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.49s
2026-04-05 00:56:33.411240 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.41s
2026-04-05 00:56:33.411245 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.31s
2026-04-05 00:56:33.411251 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.27s
2026-04-05 00:56:33.411262 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 5.03s
2026-04-05 00:56:33.411268 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.99s
2026-04-05 00:56:33.411273 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.84s
2026-04-05 00:56:33.411279 | orchestrator | haproxy-config : Add configuration for ceph-rgw when using single external frontend --- 4.82s
2026-04-05 00:56:33.411284 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.78s
2026-04-05 00:56:33.411290 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.65s
2026-04-05 00:56:33.411295 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.57s
2026-04-05 00:56:33.411300 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.52s
2026-04-05 00:56:33.411306 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.52s
2026-04-05 00:56:33.411311 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.46s
2026-04-05 00:56:33.411316 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.18s
2026-04-05 00:56:33.411322 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 4.16s
2026-04-05 00:56:33.411327 | orchestrator | 2026-04-05 00:56:33 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:36.436810 | orchestrator | 2026-04-05 00:56:36 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:36.438488 | orchestrator | 2026-04-05 00:56:36 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:36.439900 | orchestrator | 2026-04-05 00:56:36 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:36.439954 | orchestrator | 2026-04-05 00:56:36 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:39.472373 | orchestrator | 2026-04-05 00:56:39 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:39.472517 | orchestrator | 2026-04-05 00:56:39 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:39.472545 | orchestrator | 2026-04-05 00:56:39 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:39.472564 | orchestrator | 2026-04-05 00:56:39 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:42.529690 | orchestrator | 2026-04-05 00:56:42 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:42.531385 | orchestrator | 2026-04-05 00:56:42 | INFO  | Task
b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:42.532112 | orchestrator | 2026-04-05 00:56:42 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:42.532156 | orchestrator | 2026-04-05 00:56:42 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:45.582797 | orchestrator | 2026-04-05 00:56:45 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:45.583623 | orchestrator | 2026-04-05 00:56:45 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:45.585005 | orchestrator | 2026-04-05 00:56:45 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:45.585087 | orchestrator | 2026-04-05 00:56:45 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:48.623564 | orchestrator | 2026-04-05 00:56:48 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:48.623672 | orchestrator | 2026-04-05 00:56:48 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:48.624121 | orchestrator | 2026-04-05 00:56:48 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:48.624155 | orchestrator | 2026-04-05 00:56:48 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:51.658608 | orchestrator | 2026-04-05 00:56:51 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:51.661756 | orchestrator | 2026-04-05 00:56:51 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:51.664541 | orchestrator | 2026-04-05 00:56:51 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:51.664593 | orchestrator | 2026-04-05 00:56:51 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:54.732706 | orchestrator | 2026-04-05 00:56:54 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:54.733095 | orchestrator | 2026-04-05 00:56:54 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:54.734193 | orchestrator | 2026-04-05 00:56:54 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:54.734251 | orchestrator | 2026-04-05 00:56:54 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:56:57.770562 | orchestrator | 2026-04-05 00:56:57 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:56:57.771810 | orchestrator | 2026-04-05 00:56:57 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:56:57.771941 | orchestrator | 2026-04-05 00:56:57 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:56:57.772110 | orchestrator | 2026-04-05 00:56:57 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:57:00.825918 | orchestrator | 2026-04-05 00:57:00 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:57:00.826147 | orchestrator | 2026-04-05 00:57:00 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:57:00.826438 | orchestrator | 2026-04-05 00:57:00 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:57:00.826464 | orchestrator | 2026-04-05 00:57:00 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:57:03.950222 | orchestrator | 2026-04-05 00:57:03 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:57:03.950322 | orchestrator | 2026-04-05 00:57:03 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:57:03.950336 | orchestrator | 2026-04-05 00:57:03 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:57:03.950348 | orchestrator | 2026-04-05 00:57:03 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:57:06.931891 | orchestrator |
2026-04-05 00:57:06 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:06.933434 | orchestrator | 2026-04-05 00:57:06 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:06.934360 | orchestrator | 2026-04-05 00:57:06 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:06.934408 | orchestrator | 2026-04-05 00:57:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:09.982367 | orchestrator | 2026-04-05 00:57:09 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:09.984635 | orchestrator | 2026-04-05 00:57:09 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:09.987362 | orchestrator | 2026-04-05 00:57:09 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:09.987840 | orchestrator | 2026-04-05 00:57:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:13.036850 | orchestrator | 2026-04-05 00:57:13 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:13.040339 | orchestrator | 2026-04-05 00:57:13 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:13.040419 | orchestrator | 2026-04-05 00:57:13 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:13.040431 | orchestrator | 2026-04-05 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:16.092362 | orchestrator | 2026-04-05 00:57:16 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:16.093909 | orchestrator | 2026-04-05 00:57:16 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:16.095704 | orchestrator | 2026-04-05 00:57:16 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:16.096006 | orchestrator | 2026-04-05 00:57:16 | INFO  | 
Wait 1 second(s) until the next check 2026-04-05 00:57:19.148291 | orchestrator | 2026-04-05 00:57:19 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:19.149543 | orchestrator | 2026-04-05 00:57:19 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:19.151496 | orchestrator | 2026-04-05 00:57:19 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:19.151702 | orchestrator | 2026-04-05 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:22.194314 | orchestrator | 2026-04-05 00:57:22 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:22.197752 | orchestrator | 2026-04-05 00:57:22 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:22.201131 | orchestrator | 2026-04-05 00:57:22 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:22.201225 | orchestrator | 2026-04-05 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:25.238893 | orchestrator | 2026-04-05 00:57:25 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:25.239539 | orchestrator | 2026-04-05 00:57:25 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:25.240023 | orchestrator | 2026-04-05 00:57:25 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:25.240057 | orchestrator | 2026-04-05 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:28.282451 | orchestrator | 2026-04-05 00:57:28 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:28.287406 | orchestrator | 2026-04-05 00:57:28 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:28.289206 | orchestrator | 2026-04-05 00:57:28 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state 
STARTED 2026-04-05 00:57:28.289254 | orchestrator | 2026-04-05 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:31.351626 | orchestrator | 2026-04-05 00:57:31 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:31.356493 | orchestrator | 2026-04-05 00:57:31 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:31.358122 | orchestrator | 2026-04-05 00:57:31 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:31.358232 | orchestrator | 2026-04-05 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:34.398283 | orchestrator | 2026-04-05 00:57:34 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:34.399293 | orchestrator | 2026-04-05 00:57:34 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:34.401111 | orchestrator | 2026-04-05 00:57:34 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:34.401149 | orchestrator | 2026-04-05 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:37.447507 | orchestrator | 2026-04-05 00:57:37 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:37.448718 | orchestrator | 2026-04-05 00:57:37 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:37.450695 | orchestrator | 2026-04-05 00:57:37 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:37.450768 | orchestrator | 2026-04-05 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:40.505699 | orchestrator | 2026-04-05 00:57:40 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:40.509449 | orchestrator | 2026-04-05 00:57:40 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:40.510211 | orchestrator | 
2026-04-05 00:57:40 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:40.510289 | orchestrator | 2026-04-05 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:43.558237 | orchestrator | 2026-04-05 00:57:43 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:43.566730 | orchestrator | 2026-04-05 00:57:43 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:43.569597 | orchestrator | 2026-04-05 00:57:43 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:43.570108 | orchestrator | 2026-04-05 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:46.631245 | orchestrator | 2026-04-05 00:57:46 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:46.633956 | orchestrator | 2026-04-05 00:57:46 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:46.636159 | orchestrator | 2026-04-05 00:57:46 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:46.636231 | orchestrator | 2026-04-05 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:49.689353 | orchestrator | 2026-04-05 00:57:49 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:49.692007 | orchestrator | 2026-04-05 00:57:49 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:49.694665 | orchestrator | 2026-04-05 00:57:49 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:49.695016 | orchestrator | 2026-04-05 00:57:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:52.740911 | orchestrator | 2026-04-05 00:57:52 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:52.742358 | orchestrator | 2026-04-05 00:57:52 | INFO  | Task 
b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:52.744083 | orchestrator | 2026-04-05 00:57:52 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:52.744109 | orchestrator | 2026-04-05 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:55.780849 | orchestrator | 2026-04-05 00:57:55 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:55.781448 | orchestrator | 2026-04-05 00:57:55 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:55.782774 | orchestrator | 2026-04-05 00:57:55 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:55.782811 | orchestrator | 2026-04-05 00:57:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:57:58.837685 | orchestrator | 2026-04-05 00:57:58 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:57:58.839452 | orchestrator | 2026-04-05 00:57:58 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:57:58.841656 | orchestrator | 2026-04-05 00:57:58 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:57:58.841693 | orchestrator | 2026-04-05 00:57:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:01.893858 | orchestrator | 2026-04-05 00:58:01 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED 2026-04-05 00:58:01.895808 | orchestrator | 2026-04-05 00:58:01 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:58:01.898757 | orchestrator | 2026-04-05 00:58:01 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:58:01.898818 | orchestrator | 2026-04-05 00:58:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:58:04.958423 | orchestrator | 2026-04-05 00:58:04 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state 
STARTED
2026-04-05 00:58:04.960836 | orchestrator | 2026-04-05 00:58:04 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:58:04.963160 | orchestrator | 2026-04-05 00:58:04 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:58:04.963260 | orchestrator | 2026-04-05 00:58:04 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:08.010807 | orchestrator | 2026-04-05 00:58:08 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state STARTED
2026-04-05 00:58:08.011197 | orchestrator | 2026-04-05 00:58:08 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:58:08.013600 | orchestrator | 2026-04-05 00:58:08 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:58:08.013657 | orchestrator | 2026-04-05 00:58:08 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:11.066430 | orchestrator | 2026-04-05 00:58:11 | INFO  | Task bcbda999-be68-4903-8396-551ca0f1c657 is in state SUCCESS
2026-04-05 00:58:11.067249 | orchestrator |
2026-04-05 00:58:11.067282 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 00:58:11.067296 | orchestrator | 2.16.14
2026-04-05 00:58:11.067309 | orchestrator |
2026-04-05 00:58:11.067320 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-05 00:58:11.067332 | orchestrator |
2026-04-05 00:58:11.067343 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 00:58:11.067354 | orchestrator | Sunday 05 April 2026 00:46:49 +0000 (0:00:00.824) 0:00:00.824 **********
2026-04-05 00:58:11.067367 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.067379 | orchestrator |
2026-04-05 00:58:11.067390 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 00:58:11.067401 | orchestrator | Sunday 05 April 2026 00:46:50 +0000 (0:00:01.221) 0:00:02.046 **********
2026-04-05 00:58:11.067412 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.067423 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.067434 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.067444 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.067478 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.067505 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.067524 | orchestrator |
2026-04-05 00:58:11.068019 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 00:58:11.068044 | orchestrator | Sunday 05 April 2026 00:46:51 +0000 (0:00:01.581) 0:00:03.628 **********
2026-04-05 00:58:11.068058 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.068072 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.068084 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.068097 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.068139 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.068153 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.068166 | orchestrator |
2026-04-05 00:58:11.068177 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 00:58:11.068188 | orchestrator | Sunday 05 April 2026 00:46:52 +0000 (0:00:00.825) 0:00:04.453 **********
2026-04-05 00:58:11.068199 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.068210 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.068229 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.068974 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.069003 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.069014 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.069025 | orchestrator |
2026-04-05 00:58:11.069036 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 00:58:11.069047 | orchestrator | Sunday 05 April 2026 00:46:53 +0000 (0:00:01.319) 0:00:05.773 **********
2026-04-05 00:58:11.069058 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.069069 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.069080 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.069090 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.069850 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.070304 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.070344 | orchestrator |
2026-04-05 00:58:11.070357 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 00:58:11.070371 | orchestrator | Sunday 05 April 2026 00:46:54 +0000 (0:00:00.815) 0:00:06.589 **********
2026-04-05 00:58:11.070384 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.070396 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.070424 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.070437 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.070450 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.070462 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.070475 | orchestrator |
2026-04-05 00:58:11.070487 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 00:58:11.070501 | orchestrator | Sunday 05 April 2026 00:46:55 +0000 (0:00:01.118) 0:00:07.707 **********
2026-04-05 00:58:11.070519 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.070538 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.070556 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.070573 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.070590 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.070608 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.070626 | orchestrator |
2026-04-05 00:58:11.070644 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 00:58:11.070680 | orchestrator | Sunday 05 April 2026 00:46:57 +0000 (0:00:01.249) 0:00:08.956 **********
2026-04-05 00:58:11.070698 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.070717 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.070734 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.071425 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.071457 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.071469 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.071480 | orchestrator |
2026-04-05 00:58:11.071491 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 00:58:11.071502 | orchestrator | Sunday 05 April 2026 00:46:58 +0000 (0:00:00.858) 0:00:09.815 **********
2026-04-05 00:58:11.071527 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.071538 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.071552 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.072211 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.072246 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.072257 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.072268 | orchestrator |
2026-04-05 00:58:11.072279 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 00:58:11.072291 | orchestrator | Sunday 05 April 2026 00:46:59 +0000 (0:00:01.213) 0:00:11.029 **********
2026-04-05 00:58:11.072302 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 00:58:11.072363 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 00:58:11.072981 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 00:58:11.073020 | orchestrator |
2026-04-05 00:58:11.073039 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 00:58:11.073058 | orchestrator | Sunday 05 April 2026 00:47:00 +0000 (0:00:00.910) 0:00:11.939 **********
2026-04-05 00:58:11.073076 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.073094 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.073188 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.073276 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.073317 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.073360 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.073380 | orchestrator |
2026-04-05 00:58:11.073461 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 00:58:11.075879 | orchestrator | Sunday 05 April 2026 00:47:01 +0000 (0:00:02.894) 0:00:13.748 **********
2026-04-05 00:58:11.075893 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 00:58:11.075921 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 00:58:11.075929 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 00:58:11.075937 | orchestrator |
2026-04-05 00:58:11.075946 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 00:58:11.075954 | orchestrator | Sunday 05 April 2026 00:47:04 +0000 (0:00:02.894) 0:00:16.643 **********
2026-04-05 00:58:11.075963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 00:58:11.075971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 00:58:11.075979 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 00:58:11.075987 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.075995 | orchestrator |
2026-04-05 00:58:11.076003 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 00:58:11.076010 | orchestrator | Sunday 05 April 2026 00:47:05 +0000 (0:00:00.377) 0:00:17.021 **********
2026-04-05 00:58:11.076020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076032 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076040 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076048 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.076056 | orchestrator |
2026-04-05 00:58:11.076063 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-05 00:58:11.076071 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.763) 0:00:17.784 **********
2026-04-05 00:58:11.076089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076126 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076145 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.076153 | orchestrator |
2026-04-05 00:58:11.076161 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-05 00:58:11.076169 | orchestrator | Sunday 05 April 2026 00:47:06 +0000 (0:00:00.190) 0:00:17.975 **********
2026-04-05 00:58:11.076650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 00:47:02.968974', 'end': '2026-04-05 00:47:03.077486', 'delta': '0:00:00.108512', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 00:47:03.754423', 'end': '2026-04-05 00:47:03.866635', 'delta': '0:00:00.112212', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 00:47:04.563644', 'end': '2026-04-05 00:47:04.691979', 'delta': '0:00:00.128335', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.076709 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.076718 | orchestrator |
2026-04-05 00:58:11.076726 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-05 00:58:11.076734 | orchestrator | Sunday 05 April 2026 00:47:07 +0000 (0:00:00.852) 0:00:18.828 **********
2026-04-05 00:58:11.076741 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.076750 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.076758 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.076766 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.076773 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.076781 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.076789 | orchestrator |
2026-04-05 00:58:11.076797 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-05 00:58:11.076804 | orchestrator | Sunday 05 April 2026 00:47:10 +0000 (0:00:03.134) 0:00:21.963 **********
2026-04-05 00:58:11.076819 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.076828 | orchestrator |
2026-04-05 00:58:11.076835 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-05 00:58:11.076843 | orchestrator | Sunday 05 April 2026 00:47:11 +0000 (0:00:00.864) 0:00:22.827 **********
2026-04-05 00:58:11.076851 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.076859 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.076866 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.076876 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.076885 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.076894 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.076903 | orchestrator |
2026-04-05 00:58:11.076913 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 00:58:11.076922 | orchestrator | Sunday 05 April 2026 00:47:12 +0000 (0:00:01.450) 0:00:24.278 **********
2026-04-05 00:58:11.076932 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.076947 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.076957 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.076966 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.076975 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.076985 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.076994 | orchestrator |
2026-04-05 00:58:11.077004 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 00:58:11.077013 | orchestrator | Sunday 05 April 2026 00:47:14 +0000 (0:00:01.579) 0:00:25.857 **********
2026-04-05 00:58:11.077023 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077032 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.077041 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.077051 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.077060 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.077069 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.077079 | orchestrator |
2026-04-05 00:58:11.077089 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 00:58:11.077097 | orchestrator | Sunday 05 April 2026 00:47:15 +0000 (0:00:01.160) 0:00:27.018 **********
2026-04-05 00:58:11.077130 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077138 | orchestrator |
2026-04-05 00:58:11.077146 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 00:58:11.077154 | orchestrator | Sunday 05 April 2026 00:47:15 +0000 (0:00:00.119) 0:00:27.138 **********
2026-04-05 00:58:11.077164 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077177 | orchestrator |
2026-04-05 00:58:11.077187 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 00:58:11.077196 | orchestrator | Sunday 05 April 2026 00:47:15 +0000 (0:00:00.394) 0:00:27.532 **********
2026-04-05 00:58:11.077203 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077211 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.077219 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.077502 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.077519 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.077527 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.077535 | orchestrator |
2026-04-05 00:58:11.077543 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 00:58:11.077551 | orchestrator | Sunday 05 April 2026 00:47:16 +0000 (0:00:00.990) 0:00:28.523 **********
2026-04-05 00:58:11.077559 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077566 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.077576 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.077589 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.077606 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.077624 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.077636 | orchestrator |
2026-04-05 00:58:11.077649 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 00:58:11.077662 | orchestrator | Sunday 05 April 2026 00:47:18 +0000 (0:00:01.404) 0:00:29.927 **********
2026-04-05 00:58:11.077674 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077687 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.077699 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.077713 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.077726 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.077738 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.077751 | orchestrator |
2026-04-05 00:58:11.077764 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 00:58:11.077776 | orchestrator | Sunday 05 April 2026 00:47:19 +0000 (0:00:01.403) 0:00:31.331 **********
2026-04-05 00:58:11.077790 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077803 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.077817 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.077830 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.077857 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.077872 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.077886 | orchestrator |
2026-04-05 00:58:11.077900 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 00:58:11.077914 | orchestrator | Sunday 05 April 2026 00:47:20 +0000 (0:00:01.315) 0:00:32.647 **********
2026-04-05 00:58:11.077927 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.077941 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.077955 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.077969 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.077983 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.077996 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.078009 | orchestrator |
2026-04-05 00:58:11.078062 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 00:58:11.078078 | orchestrator | Sunday 05 April 2026 00:47:21 +0000 (0:00:00.669) 0:00:33.317 **********
2026-04-05 00:58:11.078093 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.078176 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.078191 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.078206 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.078220 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.078233 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.078247 | orchestrator |
2026-04-05 00:58:11.078262 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 00:58:11.078286 | orchestrator | Sunday 05 April 2026 00:47:22 +0000 (0:00:01.138) 0:00:34.455 **********
2026-04-05 00:58:11.078301 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.078314 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.078328 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.078342 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.078356 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.078370 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.078383 | orchestrator |
2026-04-05 00:58:11.078396 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-05 00:58:11.078410 | orchestrator | Sunday 05 April 2026 00:47:24 +0000 (0:00:01.390) 0:00:35.846 **********
2026-04-05 00:58:11.078427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03', 'dm-uuid-LVM-8E1xuPLEYx1uTwydDUNwPMLUgzpgnl2IeAYMIzK2AO6YtTwcvavu13HOl75B9Evz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 00:58:11.078444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61', 'dm-uuid-LVM-maRrlPjNmQL0H9aadu5k71QFHeXfjfdEipyIlSH5OXr1U4BAJhfLJgpSdP33B0eG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-05 00:58:11.078577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 00:58:11.078611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 00:58:11.078625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 00:58:11.078637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-05 00:58:11.078652 | orchestrator | skipping:
[testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194', 'dm-uuid-LVM-7EWndWI44TagQTqMBy1Pv9rnP4tweZpxjHYUBR4fuE24TPDe2OzsOGLQTaEDcelq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.078666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.078679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960', 'dm-uuid-LVM-4HnV0lqPVUvHugf1jZmoUAfymWk8v99yHxbGxoLLfHrZ8usKW78V8J8BL2Nt20SL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.078691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.078702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.078792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.078991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-05 00:58:11.079026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079180 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RpfwZn-HTk4-RnlD-ne4J-uLT2-7pJ1-8ZtVeR', 'scsi-0QEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e', 'scsi-SQEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tlZRre-3mqn-7hAq-j2kl-vH4H-yCfn-BXadiQ', 'scsi-0QEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3', 'scsi-SQEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86', 'scsi-SQEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jz3I4S-Jrdi-hSan-hrjR-5qZZ-pjy6-DhDLqU', 'scsi-0QEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189', 'scsi-SQEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VylcLb-buVg-z7iW-K22m-TKas-Ei58-re0uM9', 'scsi-0QEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a', 'scsi-SQEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8', 'scsi-SQEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079430 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.079438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a', 'dm-uuid-LVM-ZCrIUefZlnGHrpwArsx1M23Jvupc0s9GS9IrlP81CvONv0g7P0uPjtzc9mwvdwJL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523', 'dm-uuid-LVM-J6tB4UumkvukDnmlGPtlO0hmrLdkcf5efjG2SWt6Da1YZck7gyKdpi3JhxsDDh5X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079472 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.079479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-05 00:58:11.079601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079827 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tgXYE1-zQXF-aFlN-fdHh-Wc5z-AMRd-c1q17F', 'scsi-0QEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6', 'scsi-SQEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dyVz18-JZh1-rTMZ-E3Xl-m0dX-jcd7-Tl0RJt', 'scsi-0QEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379', 'scsi-SQEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da', 'scsi-SQEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.079866 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.079873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.079998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.080071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.080085 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.080096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080133 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 00:58:11.080427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.080442 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.080456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.080463 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.080503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 00:58:11.080515 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.080526 | orchestrator | 2026-04-05 00:58:11.080538 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 00:58:11.080549 | orchestrator | Sunday 05 April 2026 00:47:25 +0000 (0:00:01.475) 0:00:37.321 ********** 2026-04-05 00:58:11.080560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 
'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03', 'dm-uuid-LVM-8E1xuPLEYx1uTwydDUNwPMLUgzpgnl2IeAYMIzK2AO6YtTwcvavu13HOl75B9Evz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080573 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194', 'dm-uuid-LVM-7EWndWI44TagQTqMBy1Pv9rnP4tweZpxjHYUBR4fuE24TPDe2OzsOGLQTaEDcelq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61', 'dm-uuid-LVM-maRrlPjNmQL0H9aadu5k71QFHeXfjfdEipyIlSH5OXr1U4BAJhfLJgpSdP33B0eG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080623 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960', 'dm-uuid-LVM-4HnV0lqPVUvHugf1jZmoUAfymWk8v99yHxbGxoLLfHrZ8usKW78V8J8BL2Nt20SL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080721 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080728 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080789 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080844 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 00:58:11.080930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RpfwZn-HTk4-RnlD-ne4J-uLT2-7pJ1-8ZtVeR', 'scsi-0QEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e', 'scsi-SQEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tlZRre-3mqn-7hAq-j2kl-vH4H-yCfn-BXadiQ', 'scsi-0QEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3', 'scsi-SQEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86', 'scsi-SQEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.080981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081079 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a', 'dm-uuid-LVM-ZCrIUefZlnGHrpwArsx1M23Jvupc0s9GS9IrlP81CvONv0g7P0uPjtzc9mwvdwJL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081172 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.081185 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523', 'dm-uuid-LVM-J6tB4UumkvukDnmlGPtlO0hmrLdkcf5efjG2SWt6Da1YZck7gyKdpi3JhxsDDh5X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081200 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081265 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081282 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081307 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081315 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081321 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081374 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081385 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081409 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081417 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-05 00:58:11.081425 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081654 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jz3I4S-Jrdi-hSan-hrjR-5qZZ-pjy6-DhDLqU', 'scsi-0QEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189', 'scsi-SQEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081667 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081746 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VylcLb-buVg-z7iW-K22m-TKas-Ei58-re0uM9', 'scsi-0QEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a', 'scsi-SQEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081758 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 00:58:11.081784 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8', 'scsi-SQEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081836 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tgXYE1-zQXF-aFlN-fdHh-Wc5z-AMRd-c1q17F', 'scsi-0QEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6', 'scsi-SQEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081850 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dyVz18-JZh1-rTMZ-E3Xl-m0dX-jcd7-Tl0RJt', 'scsi-0QEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379', 'scsi-SQEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081868 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081874 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081880 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.081959 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da', 'scsi-SQEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081992 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.081999 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.082009 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.082058 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 00:58:11.082075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082241 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082272 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082279 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082346 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc38e3f2-573f-4547-a204-e1f48ae0a849-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082362 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.082373 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d8fe4d0-47c6-47ce-9739-2701ccce9737-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082382 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082388 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.082437 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082451 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.082458 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082465 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082471 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082481 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082488 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082494 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082571 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082588 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082606 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part14', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part15', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part16', 'scsi-SQEMU_QEMU_HARDDISK_a9214721-eba3-44ac-9648-f7cb9ca525d3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082619 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 00:58:11.082631 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.082638 | orchestrator |
2026-04-05 00:58:11.082692 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 00:58:11.082702 | orchestrator | Sunday 05 April 2026 00:47:27 +0000 (0:00:02.204) 0:00:39.526 **********
2026-04-05 00:58:11.082708 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.082715 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.082721 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.082774 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.082802 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.082811 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.082820 | orchestrator |
2026-04-05 00:58:11.082830 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 00:58:11.082839 | orchestrator | Sunday 05 April 2026 00:47:29 +0000 (0:00:02.246) 0:00:41.773 **********
2026-04-05 00:58:11.082849 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.082858 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.082867 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.082876 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.082884 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.082893 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.082903 | orchestrator |
2026-04-05 00:58:11.082914 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 00:58:11.082925 | orchestrator | Sunday 05 April 2026 00:47:32 +0000 (0:00:02.138) 0:00:43.911 **********
2026-04-05 00:58:11.082969 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.082991 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.082998 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083004 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.083010 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.083016 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.083022 | orchestrator |
2026-04-05 00:58:11.083028 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 00:58:11.083034 | orchestrator | Sunday 05 April 2026 00:47:33 +0000 (0:00:01.694) 0:00:45.605 **********
2026-04-05 00:58:11.083041 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083047 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.083053 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083059 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.083065 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.083071 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.083077 | orchestrator |
2026-04-05 00:58:11.083083 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 00:58:11.083089 | orchestrator | Sunday 05 April 2026 00:47:34 +0000 (0:00:00.841) 0:00:46.447 **********
2026-04-05 00:58:11.083095 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083153 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.083162 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083169 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.083175 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.083181 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.083188 | orchestrator |
2026-04-05 00:58:11.083194 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 00:58:11.083201 | orchestrator | Sunday 05 April 2026 00:47:35 +0000 (0:00:00.919) 0:00:47.367 **********
2026-04-05 00:58:11.083208 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083214 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.083220 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083227 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.083242 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.083254 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.083260 | orchestrator |
2026-04-05 00:58:11.083267 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 00:58:11.083274 | orchestrator | Sunday 05 April 2026 00:47:36 +0000 (0:00:00.823) 0:00:48.191 **********
2026-04-05 00:58:11.083280 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 00:58:11.083287 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 00:58:11.083293 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 00:58:11.083300 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 00:58:11.083306 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 00:58:11.083313 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 00:58:11.083319 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 00:58:11.083327 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 00:58:11.083335 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 00:58:11.083343 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 00:58:11.083351 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 00:58:11.083359 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 00:58:11.083367 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 00:58:11.083374 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 00:58:11.083382 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 00:58:11.083390 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 00:58:11.083397 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 00:58:11.083405 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 00:58:11.083413 | orchestrator |
2026-04-05 00:58:11.083421 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 00:58:11.083428 | orchestrator | Sunday 05 April 2026 00:47:41 +0000 (0:00:05.189) 0:00:53.381 **********
2026-04-05 00:58:11.083436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 00:58:11.083445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 00:58:11.083452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 00:58:11.083460 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 00:58:11.083475 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 00:58:11.083483 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 00:58:11.083491 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.083499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 00:58:11.083595 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 00:58:11.083610 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 00:58:11.083621 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 00:58:11.083644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 00:58:11.083654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 00:58:11.083663 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.083669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 00:58:11.083675 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 00:58:11.083681 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 00:58:11.083687 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.083693 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 00:58:11.083699 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 00:58:11.083712 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 00:58:11.083718 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.083724 | orchestrator |
2026-04-05 00:58:11.083731 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 00:58:11.083736 | orchestrator | Sunday 05 April 2026 00:47:43 +0000 (0:00:02.047) 0:00:55.428 **********
2026-04-05 00:58:11.083741 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.083747 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.083752 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.083758 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.083763 | orchestrator |
2026-04-05 00:58:11.083769 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 00:58:11.083776 | orchestrator | Sunday 05 April 2026 00:47:45 +0000 (0:00:01.746) 0:00:57.175 **********
2026-04-05 00:58:11.083781 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083787 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.083792 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083797 | orchestrator |
2026-04-05 00:58:11.083803 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 00:58:11.083808 | orchestrator | Sunday 05 April 2026 00:47:45 +0000 (0:00:00.482) 0:00:57.657 **********
2026-04-05 00:58:11.083813 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083819 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.083824 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083829 | orchestrator |
2026-04-05 00:58:11.083835 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 00:58:11.083840 | orchestrator | Sunday 05 April 2026 00:47:46 +0000 (0:00:00.827) 0:00:58.484 **********
2026-04-05 00:58:11.083845 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083850 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.083856 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.083861 | orchestrator |
2026-04-05 00:58:11.083871 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 00:58:11.083877 | orchestrator | Sunday 05 April 2026 00:47:47 +0000 (0:00:00.473) 0:00:58.957 **********
2026-04-05 00:58:11.083882 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.083887 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.083893 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.083898 | orchestrator |
2026-04-05 00:58:11.083903 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 00:58:11.083909 | orchestrator | Sunday 05 April 2026 00:47:47 +0000 (0:00:00.812) 0:00:59.770 **********
2026-04-05 00:58:11.083914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.083949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.083955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.083974 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.083980 | orchestrator |
2026-04-05 00:58:11.083985 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 00:58:11.083991 | orchestrator | Sunday 05 April 2026 00:47:48 +0000 (0:00:00.418) 0:01:00.189 **********
2026-04-05 00:58:11.083996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.084001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.084007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.084012 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.084017 | orchestrator |
2026-04-05 00:58:11.084023 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 00:58:11.084028 | orchestrator | Sunday 05 April 2026 00:47:48 +0000 (0:00:00.588) 0:01:00.777 **********
2026-04-05 00:58:11.084033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.084047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.084053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.084058 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.084063 | orchestrator |
2026-04-05 00:58:11.084069 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 00:58:11.084074 | orchestrator | Sunday 05 April 2026 00:47:49 +0000 (0:00:00.516) 0:01:01.294 **********
2026-04-05 00:58:11.084079 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.084085 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.084090 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.084095 | orchestrator |
2026-04-05 00:58:11.084116 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 00:58:11.084122 | orchestrator | Sunday 05 April 2026 00:47:49 +0000 (0:00:00.396) 0:01:01.691 **********
2026-04-05 00:58:11.084128 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-05 00:58:11.084133 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-05 00:58:11.084171 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-05 00:58:11.084181 | orchestrator |
2026-04-05 00:58:11.084189 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 00:58:11.084198 | orchestrator | Sunday 05 April 2026 00:47:50 +0000 (0:00:00.991) 0:01:02.683 **********
2026-04-05 00:58:11.084205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 00:58:11.084213 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 00:58:11.084222 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 00:58:11.084230 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.084238 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 00:58:11.084247 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 00:58:11.084257 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 00:58:11.084265 | orchestrator |
2026-04-05 00:58:11.084275 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 00:58:11.084281 | orchestrator | Sunday 05 April 2026 00:47:52 +0000 (0:00:01.258) 0:01:03.942 **********
2026-04-05 00:58:11.084286 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 00:58:11.084291 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 00:58:11.084297 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 00:58:11.084302 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.084307 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 00:58:11.084313 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 00:58:11.084318 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 00:58:11.084323 | orchestrator |
2026-04-05 00:58:11.084329 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 00:58:11.084334 | orchestrator | Sunday 05 April 2026 00:47:54 +0000 (0:00:02.074) 0:01:06.016 **********
2026-04-05 00:58:11.084340 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.084348 | orchestrator |
2026-04-05 00:58:11.084353 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 00:58:11.084358 | orchestrator | Sunday 05 April 2026 00:47:55 +0000 (0:00:01.404) 0:01:07.421 **********
2026-04-05 00:58:11.084368 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.084380 | orchestrator |
2026-04-05 00:58:11.084386 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 00:58:11.084391 | orchestrator | Sunday 05 April 2026 00:47:56 +0000 (0:00:01.235) 0:01:08.656 **********
2026-04-05 00:58:11.084396 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.084402 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.084407 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.084412 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.084418 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.084423 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.084428 | orchestrator |
2026-04-05 00:58:11.084434 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 00:58:11.084439 | orchestrator | Sunday 05 April 2026 00:47:58 +0000 (0:00:01.648) 0:01:10.304 **********
2026-04-05 00:58:11.084445 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.084450 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.084455 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.084461 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.084466 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.084471 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.084477 | orchestrator |
2026-04-05 00:58:11.084482 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 00:58:11.084487 | orchestrator | Sunday 05 April 2026 00:47:59 +0000 (0:00:01.398) 0:01:11.702 **********
2026-04-05 00:58:11.084493 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.084498 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.084503 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.084510 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.084519 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.084528 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.084536 | orchestrator |
2026-04-05 00:58:11.084545 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 00:58:11.084555 | orchestrator | Sunday 05 April 2026 00:48:02 +0000 (0:00:02.587) 0:01:14.290 **********
2026-04-05 00:58:11.084563 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.084573 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.084581 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.084586 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.084592 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.084597 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.084602 | orchestrator |
2026-04-05 00:58:11.084608 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 00:58:11.084613 | orchestrator | Sunday 05 April 2026 00:48:03 +0000 (0:00:01.061) 0:01:15.351 **********
2026-04-05 00:58:11.084618 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.084623 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.084629 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.084634 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.084639 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.084668 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.084675 | orchestrator |
2026-04-05 00:58:11.084680 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 00:58:11.084686 | orchestrator | Sunday 05 April 2026 00:48:04 +0000 (0:00:01.425) 0:01:16.777 **********
2026-04-05 00:58:11.084691 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.084696 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.084702 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.084707 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.084713 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.084718 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.084723 | orchestrator |
2026-04-05 00:58:11.084729 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 00:58:11.084745 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:01.205) 0:01:17.982 **********
2026-04-05 00:58:11.084750 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.084755 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.084761 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.084766 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.084771 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.084777 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.084782 | orchestrator |
2026-04-05 00:58:11.084787 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 00:58:11.084793 | orchestrator | Sunday 05 April 2026 00:48:06 +0000 (0:00:00.742) 0:01:18.725 **********
2026-04-05 00:58:11.084798 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.084803 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.084809 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.084814 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.084819 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.084824 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.084830 | orchestrator
| 2026-04-05 00:58:11.084835 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 00:58:11.084840 | orchestrator | Sunday 05 April 2026 00:48:08 +0000 (0:00:01.922) 0:01:20.648 ********** 2026-04-05 00:58:11.084846 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.084851 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.084856 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.084861 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.084867 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.084872 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.084877 | orchestrator | 2026-04-05 00:58:11.084883 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 00:58:11.084888 | orchestrator | Sunday 05 April 2026 00:48:10 +0000 (0:00:01.197) 0:01:21.846 ********** 2026-04-05 00:58:11.084893 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.084899 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.084904 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.084909 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.084915 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.084920 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.084925 | orchestrator | 2026-04-05 00:58:11.084931 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 00:58:11.084936 | orchestrator | Sunday 05 April 2026 00:48:11 +0000 (0:00:00.993) 0:01:22.840 ********** 2026-04-05 00:58:11.084941 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.084947 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.084952 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.084961 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.084966 | orchestrator | ok: [testbed-node-1] 2026-04-05 
00:58:11.084972 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.084977 | orchestrator | 2026-04-05 00:58:11.084982 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 00:58:11.084988 | orchestrator | Sunday 05 April 2026 00:48:11 +0000 (0:00:00.747) 0:01:23.588 ********** 2026-04-05 00:58:11.084993 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.084998 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.085004 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.085009 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085014 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085020 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085025 | orchestrator | 2026-04-05 00:58:11.085030 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 00:58:11.085036 | orchestrator | Sunday 05 April 2026 00:48:12 +0000 (0:00:01.145) 0:01:24.733 ********** 2026-04-05 00:58:11.085041 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.085046 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.085052 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.085062 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085067 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085073 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085078 | orchestrator | 2026-04-05 00:58:11.085084 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 00:58:11.085089 | orchestrator | Sunday 05 April 2026 00:48:13 +0000 (0:00:00.822) 0:01:25.556 ********** 2026-04-05 00:58:11.085094 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.085099 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.085137 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.085142 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 00:58:11.085148 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085153 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085158 | orchestrator | 2026-04-05 00:58:11.085164 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 00:58:11.085169 | orchestrator | Sunday 05 April 2026 00:48:15 +0000 (0:00:01.489) 0:01:27.045 ********** 2026-04-05 00:58:11.085174 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085179 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.085185 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085190 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085195 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085201 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085206 | orchestrator | 2026-04-05 00:58:11.085211 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 00:58:11.085217 | orchestrator | Sunday 05 April 2026 00:48:16 +0000 (0:00:01.371) 0:01:28.417 ********** 2026-04-05 00:58:11.085222 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085227 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.085233 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085238 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085262 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085269 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085274 | orchestrator | 2026-04-05 00:58:11.085280 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 00:58:11.085285 | orchestrator | Sunday 05 April 2026 00:48:17 +0000 (0:00:01.343) 0:01:29.760 ********** 2026-04-05 00:58:11.085290 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085296 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 00:58:11.085301 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085306 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.085312 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.085317 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.085322 | orchestrator | 2026-04-05 00:58:11.085328 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 00:58:11.085333 | orchestrator | Sunday 05 April 2026 00:48:18 +0000 (0:00:00.731) 0:01:30.492 ********** 2026-04-05 00:58:11.085338 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.085344 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.085349 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.085354 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.085360 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.085365 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.085370 | orchestrator | 2026-04-05 00:58:11.085376 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 00:58:11.085381 | orchestrator | Sunday 05 April 2026 00:48:19 +0000 (0:00:01.286) 0:01:31.778 ********** 2026-04-05 00:58:11.085386 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.085392 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.085397 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.085402 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.085407 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.085413 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.085418 | orchestrator | 2026-04-05 00:58:11.085423 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 00:58:11.085435 | orchestrator | Sunday 05 April 2026 00:48:21 +0000 (0:00:01.431) 0:01:33.210 ********** 2026-04-05 00:58:11.085441 | orchestrator | changed: [testbed-node-4] 2026-04-05 
00:58:11.085446 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:58:11.085452 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.085457 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.085462 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.085468 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.085473 | orchestrator | 2026-04-05 00:58:11.085478 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 00:58:11.085484 | orchestrator | Sunday 05 April 2026 00:48:23 +0000 (0:00:01.869) 0:01:35.079 ********** 2026-04-05 00:58:11.085489 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:58:11.085494 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:58:11.085500 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.085505 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.085513 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.085524 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.085533 | orchestrator | 2026-04-05 00:58:11.085542 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 00:58:11.085553 | orchestrator | Sunday 05 April 2026 00:48:25 +0000 (0:00:02.386) 0:01:37.465 ********** 2026-04-05 00:58:11.085567 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.085577 | orchestrator | 2026-04-05 00:58:11.085587 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-05 00:58:11.085592 | orchestrator | Sunday 05 April 2026 00:48:26 +0000 (0:00:01.100) 0:01:38.566 ********** 2026-04-05 00:58:11.085598 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085603 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.085608 
| orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085614 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085619 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085624 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085630 | orchestrator | 2026-04-05 00:58:11.085635 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-05 00:58:11.085641 | orchestrator | Sunday 05 April 2026 00:48:27 +0000 (0:00:00.554) 0:01:39.121 ********** 2026-04-05 00:58:11.085646 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085651 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.085657 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085662 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085668 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085673 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085678 | orchestrator | 2026-04-05 00:58:11.085684 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-05 00:58:11.085689 | orchestrator | Sunday 05 April 2026 00:48:28 +0000 (0:00:00.800) 0:01:39.921 ********** 2026-04-05 00:58:11.085695 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 00:58:11.085700 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 00:58:11.085705 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 00:58:11.085711 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 00:58:11.085716 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 00:58:11.085721 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 00:58:11.085727 | orchestrator 
| ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 00:58:11.085732 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 00:58:11.085743 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 00:58:11.085749 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 00:58:11.085774 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 00:58:11.085781 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 00:58:11.085786 | orchestrator | 2026-04-05 00:58:11.085792 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-05 00:58:11.085797 | orchestrator | Sunday 05 April 2026 00:48:29 +0000 (0:00:01.326) 0:01:41.247 ********** 2026-04-05 00:58:11.085802 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:58:11.085808 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:58:11.085813 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.085818 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.085824 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.085829 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.085834 | orchestrator | 2026-04-05 00:58:11.085839 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-05 00:58:11.085845 | orchestrator | Sunday 05 April 2026 00:48:30 +0000 (0:00:01.245) 0:01:42.493 ********** 2026-04-05 00:58:11.085850 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085856 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.085861 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085866 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
00:58:11.085871 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085877 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085882 | orchestrator | 2026-04-05 00:58:11.085887 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-05 00:58:11.085893 | orchestrator | Sunday 05 April 2026 00:48:31 +0000 (0:00:00.627) 0:01:43.121 ********** 2026-04-05 00:58:11.085898 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085903 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.085908 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085914 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085919 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085924 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085929 | orchestrator | 2026-04-05 00:58:11.085935 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 00:58:11.085940 | orchestrator | Sunday 05 April 2026 00:48:32 +0000 (0:00:00.806) 0:01:43.928 ********** 2026-04-05 00:58:11.085945 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.085951 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.085956 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.085961 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.085966 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.085972 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.085977 | orchestrator | 2026-04-05 00:58:11.085982 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 00:58:11.085988 | orchestrator | Sunday 05 April 2026 00:48:32 +0000 (0:00:00.541) 0:01:44.469 ********** 2026-04-05 00:58:11.085993 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.085999 | orchestrator | 2026-04-05 00:58:11.086004 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-05 00:58:11.086013 | orchestrator | Sunday 05 April 2026 00:48:33 +0000 (0:00:01.213) 0:01:45.683 ********** 2026-04-05 00:58:11.086045 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.086051 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.086056 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.086062 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.086072 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.086077 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.086082 | orchestrator | 2026-04-05 00:58:11.086088 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-05 00:58:11.086093 | orchestrator | Sunday 05 April 2026 00:49:44 +0000 (0:01:10.611) 0:02:56.294 ********** 2026-04-05 00:58:11.086099 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 00:58:11.086122 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 00:58:11.086128 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 00:58:11.086133 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086139 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 00:58:11.086144 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 00:58:11.086149 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 00:58:11.086155 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086160 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-04-05 00:58:11.086165 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 00:58:11.086170 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 00:58:11.086176 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086181 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 00:58:11.086186 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 00:58:11.086192 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 00:58:11.086197 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086202 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 00:58:11.086207 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 00:58:11.086213 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 00:58:11.086218 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086243 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 00:58:11.086250 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 00:58:11.086255 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 00:58:11.086261 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086266 | orchestrator | 2026-04-05 00:58:11.086271 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-05 00:58:11.086277 | orchestrator | Sunday 05 April 2026 00:49:45 +0000 (0:00:00.877) 0:02:57.172 ********** 2026-04-05 00:58:11.086282 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086287 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
00:58:11.086293 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086298 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086303 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086309 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086314 | orchestrator | 2026-04-05 00:58:11.086320 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-05 00:58:11.086325 | orchestrator | Sunday 05 April 2026 00:49:46 +0000 (0:00:01.054) 0:02:58.227 ********** 2026-04-05 00:58:11.086330 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086336 | orchestrator | 2026-04-05 00:58:11.086341 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-05 00:58:11.086346 | orchestrator | Sunday 05 April 2026 00:49:46 +0000 (0:00:00.196) 0:02:58.423 ********** 2026-04-05 00:58:11.086352 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086357 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086368 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086373 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086379 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086384 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086389 | orchestrator | 2026-04-05 00:58:11.086395 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-05 00:58:11.086400 | orchestrator | Sunday 05 April 2026 00:49:47 +0000 (0:00:01.155) 0:02:59.579 ********** 2026-04-05 00:58:11.086405 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086411 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086416 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086421 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086427 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
00:58:11.086432 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086437 | orchestrator | 2026-04-05 00:58:11.086443 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-05 00:58:11.086448 | orchestrator | Sunday 05 April 2026 00:49:49 +0000 (0:00:01.403) 0:03:00.983 ********** 2026-04-05 00:58:11.086454 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086459 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086464 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086469 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086475 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086480 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086486 | orchestrator | 2026-04-05 00:58:11.086491 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 00:58:11.086497 | orchestrator | Sunday 05 April 2026 00:49:49 +0000 (0:00:00.798) 0:03:01.781 ********** 2026-04-05 00:58:11.086502 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.086512 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.086523 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.086531 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.086541 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.086551 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.086560 | orchestrator | 2026-04-05 00:58:11.086570 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 00:58:11.086579 | orchestrator | Sunday 05 April 2026 00:49:52 +0000 (0:00:02.815) 0:03:04.597 ********** 2026-04-05 00:58:11.086585 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.086590 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.086596 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.086601 | orchestrator | ok: [testbed-node-0] 
2026-04-05 00:58:11.086606 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.086612 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.086617 | orchestrator | 2026-04-05 00:58:11.086622 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 00:58:11.086628 | orchestrator | Sunday 05 April 2026 00:49:53 +0000 (0:00:00.731) 0:03:05.328 ********** 2026-04-05 00:58:11.086634 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.086640 | orchestrator | 2026-04-05 00:58:11.086646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-05 00:58:11.086651 | orchestrator | Sunday 05 April 2026 00:49:54 +0000 (0:00:01.168) 0:03:06.496 ********** 2026-04-05 00:58:11.086657 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086662 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086667 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086673 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086678 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086683 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086689 | orchestrator | 2026-04-05 00:58:11.086694 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-05 00:58:11.086705 | orchestrator | Sunday 05 April 2026 00:49:55 +0000 (0:00:00.689) 0:03:07.185 ********** 2026-04-05 00:58:11.086710 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086716 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086721 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086726 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086732 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:58:11.086737 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086742 | orchestrator | 2026-04-05 00:58:11.086748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-05 00:58:11.086753 | orchestrator | Sunday 05 April 2026 00:49:56 +0000 (0:00:00.938) 0:03:08.123 ********** 2026-04-05 00:58:11.086759 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086764 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086788 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086794 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086800 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086805 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086823 | orchestrator | 2026-04-05 00:58:11.086828 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-05 00:58:11.086834 | orchestrator | Sunday 05 April 2026 00:49:57 +0000 (0:00:00.722) 0:03:08.846 ********** 2026-04-05 00:58:11.086839 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086844 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086850 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086855 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086860 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086866 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086871 | orchestrator | 2026-04-05 00:58:11.086876 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-05 00:58:11.086882 | orchestrator | Sunday 05 April 2026 00:49:58 +0000 (0:00:01.320) 0:03:10.166 ********** 2026-04-05 00:58:11.086887 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086892 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086898 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
00:58:11.086903 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086908 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086914 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086919 | orchestrator | 2026-04-05 00:58:11.086924 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-05 00:58:11.086930 | orchestrator | Sunday 05 April 2026 00:49:59 +0000 (0:00:01.137) 0:03:11.304 ********** 2026-04-05 00:58:11.086935 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086940 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086946 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086951 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.086956 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.086961 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.086967 | orchestrator | 2026-04-05 00:58:11.086972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-05 00:58:11.086977 | orchestrator | Sunday 05 April 2026 00:50:00 +0000 (0:00:01.200) 0:03:12.505 ********** 2026-04-05 00:58:11.086983 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.086988 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.086993 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.086999 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.087004 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.087009 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.087015 | orchestrator | 2026-04-05 00:58:11.087020 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-05 00:58:11.087025 | orchestrator | Sunday 05 April 2026 00:50:01 +0000 (0:00:00.932) 0:03:13.438 ********** 2026-04-05 00:58:11.087031 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
00:58:11.087046 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.087051 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.087056 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.087062 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.087067 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.087072 | orchestrator | 2026-04-05 00:58:11.087077 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-05 00:58:11.087087 | orchestrator | Sunday 05 April 2026 00:50:03 +0000 (0:00:01.409) 0:03:14.847 ********** 2026-04-05 00:58:11.087092 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.087098 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.087138 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.087144 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.087149 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.087155 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.087160 | orchestrator | 2026-04-05 00:58:11.087165 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 00:58:11.087171 | orchestrator | Sunday 05 April 2026 00:50:04 +0000 (0:00:01.537) 0:03:16.384 ********** 2026-04-05 00:58:11.087176 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.087182 | orchestrator | 2026-04-05 00:58:11.087187 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-05 00:58:11.087192 | orchestrator | Sunday 05 April 2026 00:50:05 +0000 (0:00:01.247) 0:03:17.632 ********** 2026-04-05 00:58:11.087198 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-05 00:58:11.087203 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-05 
00:58:11.087208 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-05 00:58:11.087214 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-05 00:58:11.087219 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-05 00:58:11.087224 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-05 00:58:11.087230 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-05 00:58:11.087235 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-05 00:58:11.087241 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-05 00:58:11.087246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-05 00:58:11.087251 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-05 00:58:11.087256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-05 00:58:11.087262 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-05 00:58:11.087267 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-05 00:58:11.087272 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-05 00:58:11.087278 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-05 00:58:11.087283 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-05 00:58:11.087288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-05 00:58:11.087312 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-05 00:58:11.087319 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-05 00:58:11.087324 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-05 00:58:11.087330 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-05 00:58:11.087335 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 
2026-04-05 00:58:11.087340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-05 00:58:11.087345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-05 00:58:11.087351 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-05 00:58:11.087356 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-05 00:58:11.087367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-05 00:58:11.087372 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-05 00:58:11.087378 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-05 00:58:11.087383 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-05 00:58:11.087388 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-05 00:58:11.087394 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-05 00:58:11.087399 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-05 00:58:11.087404 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-05 00:58:11.087410 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-05 00:58:11.087415 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-05 00:58:11.087421 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-05 00:58:11.087426 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-05 00:58:11.087431 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-05 00:58:11.087437 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-05 00:58:11.087442 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-05 00:58:11.087447 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-05 
00:58:11.087453 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 00:58:11.087458 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 00:58:11.087463 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-05 00:58:11.087469 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-05 00:58:11.087474 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-05 00:58:11.087479 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-05 00:58:11.087485 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 00:58:11.087490 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 00:58:11.087499 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 00:58:11.087504 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 00:58:11.087510 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 00:58:11.087515 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 00:58:11.087520 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-05 00:58:11.087525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 00:58:11.087531 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 00:58:11.087536 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 00:58:11.087541 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 00:58:11.087547 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 00:58:11.087552 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 00:58:11.087557 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 00:58:11.087563 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 00:58:11.087568 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 00:58:11.087573 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 00:58:11.087579 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 00:58:11.087584 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 00:58:11.087598 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 00:58:11.087604 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 00:58:11.087609 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 00:58:11.087614 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 00:58:11.087620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 00:58:11.087625 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 00:58:11.087630 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 00:58:11.087636 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 00:58:11.087656 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 00:58:11.087663 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 00:58:11.087668 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-05 00:58:11.087673 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-05 00:58:11.087678 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 00:58:11.087682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 00:58:11.087687 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 00:58:11.087692 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-05 00:58:11.087697 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 00:58:11.087701 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-05 00:58:11.087706 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-05 00:58:11.087711 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 00:58:11.087716 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-05 00:58:11.087720 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-05 00:58:11.087725 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-05 00:58:11.087730 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-05 00:58:11.087735 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-05 00:58:11.087739 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 00:58:11.087744 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-05 00:58:11.087749 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-05 00:58:11.087754 | orchestrator | 2026-04-05 00:58:11.087758 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 00:58:11.087763 | orchestrator | Sunday 05 April 2026 00:50:13 +0000 (0:00:07.993) 0:03:25.626 ********** 2026-04-05 00:58:11.087768 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.087773 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 00:58:11.087778 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.087783 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:58:11.087787 | orchestrator | 2026-04-05 00:58:11.087792 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-05 00:58:11.087797 | orchestrator | Sunday 05 April 2026 00:50:15 +0000 (0:00:01.614) 0:03:27.240 ********** 2026-04-05 00:58:11.087802 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.087807 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.087812 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.087822 | orchestrator | 2026-04-05 00:58:11.087827 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-05 00:58:11.087832 | orchestrator | Sunday 05 April 2026 00:50:16 +0000 (0:00:00.931) 0:03:28.172 ********** 2026-04-05 00:58:11.087836 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.087841 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.087846 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.087851 | orchestrator | 2026-04-05 00:58:11.087856 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2026-04-05 00:58:11.087860 | orchestrator | Sunday 05 April 2026 00:50:18 +0000 (0:00:01.684) 0:03:29.857 ********** 2026-04-05 00:58:11.087865 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.087870 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.087875 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.087880 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.087884 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.087889 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.087894 | orchestrator | 2026-04-05 00:58:11.087898 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-05 00:58:11.087903 | orchestrator | Sunday 05 April 2026 00:50:19 +0000 (0:00:01.163) 0:03:31.021 ********** 2026-04-05 00:58:11.087908 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.087913 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.087918 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.087922 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.087927 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.087932 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.087936 | orchestrator | 2026-04-05 00:58:11.087941 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-05 00:58:11.087946 | orchestrator | Sunday 05 April 2026 00:50:20 +0000 (0:00:01.088) 0:03:32.110 ********** 2026-04-05 00:58:11.087951 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.087955 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.087960 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.087965 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.087970 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.087974 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
00:58:11.087979 | orchestrator | 2026-04-05 00:58:11.087998 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-05 00:58:11.088004 | orchestrator | Sunday 05 April 2026 00:50:21 +0000 (0:00:00.730) 0:03:32.841 ********** 2026-04-05 00:58:11.088008 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088013 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088018 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088022 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088027 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088032 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088036 | orchestrator | 2026-04-05 00:58:11.088041 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 00:58:11.088046 | orchestrator | Sunday 05 April 2026 00:50:21 +0000 (0:00:00.831) 0:03:33.672 ********** 2026-04-05 00:58:11.088051 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088055 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088060 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088065 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088069 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088074 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088079 | orchestrator | 2026-04-05 00:58:11.088088 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-05 00:58:11.088093 | orchestrator | Sunday 05 April 2026 00:50:22 +0000 (0:00:00.960) 0:03:34.633 ********** 2026-04-05 00:58:11.088098 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088116 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088121 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088126 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:58:11.088130 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088135 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088140 | orchestrator | 2026-04-05 00:58:11.088144 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-05 00:58:11.088149 | orchestrator | Sunday 05 April 2026 00:50:23 +0000 (0:00:00.663) 0:03:35.297 ********** 2026-04-05 00:58:11.088154 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088159 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088163 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088168 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088173 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088177 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088182 | orchestrator | 2026-04-05 00:58:11.088187 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-05 00:58:11.088192 | orchestrator | Sunday 05 April 2026 00:50:24 +0000 (0:00:01.038) 0:03:36.335 ********** 2026-04-05 00:58:11.088197 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088201 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088206 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088211 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088215 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088220 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088225 | orchestrator | 2026-04-05 00:58:11.088292 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 00:58:11.088312 | orchestrator | Sunday 05 April 2026 00:50:25 +0000 (0:00:00.690) 0:03:37.026 ********** 2026-04-05 00:58:11.088317 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:58:11.088325 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088330 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088335 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.088340 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.088344 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.088349 | orchestrator | 2026-04-05 00:58:11.088354 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 00:58:11.088359 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:01.968) 0:03:38.995 ********** 2026-04-05 00:58:11.088364 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.088368 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.088373 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.088378 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088383 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088387 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088392 | orchestrator | 2026-04-05 00:58:11.088397 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 00:58:11.088402 | orchestrator | Sunday 05 April 2026 00:50:27 +0000 (0:00:00.695) 0:03:39.691 ********** 2026-04-05 00:58:11.088407 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.088411 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.088416 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088421 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.088425 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088430 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088435 | orchestrator | 2026-04-05 00:58:11.088440 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 00:58:11.088445 | orchestrator | Sunday 05 April 2026 00:50:28 +0000 
(0:00:01.066) 0:03:40.758 ********** 2026-04-05 00:58:11.088455 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088460 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088465 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088469 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088474 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088479 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088483 | orchestrator | 2026-04-05 00:58:11.088488 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 00:58:11.088493 | orchestrator | Sunday 05 April 2026 00:50:29 +0000 (0:00:00.645) 0:03:41.404 ********** 2026-04-05 00:58:11.088498 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.088503 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.088508 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.088513 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088539 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088544 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088549 | orchestrator | 2026-04-05 00:58:11.088554 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 00:58:11.088559 | orchestrator | Sunday 05 April 2026 00:50:30 +0000 (0:00:00.973) 0:03:42.378 ********** 2026-04-05 00:58:11.088566 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-05 00:58:11.088573 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-05 00:58:11.088579 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088584 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-05 00:58:11.088589 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-05 00:58:11.088594 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088599 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-05 00:58:11.088607 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-04-05 00:58:11.088612 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088617 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088622 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088626 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088635 | orchestrator | 2026-04-05 00:58:11.088640 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 00:58:11.088645 | orchestrator | Sunday 05 April 2026 00:50:31 +0000 (0:00:00.872) 0:03:43.250 ********** 2026-04-05 00:58:11.088650 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088654 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088659 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088664 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088668 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088673 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088678 | orchestrator | 2026-04-05 00:58:11.088683 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 00:58:11.088687 | orchestrator | Sunday 05 April 2026 00:50:32 +0000 (0:00:00.953) 0:03:44.204 ********** 2026-04-05 00:58:11.088692 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088697 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088702 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088706 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088711 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088716 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088720 | orchestrator | 2026-04-05 00:58:11.088725 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 00:58:11.088730 | orchestrator | Sunday 05 April 2026 
00:50:33 +0000 (0:00:00.688) 0:03:44.892 ********** 2026-04-05 00:58:11.088735 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088739 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088744 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088749 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088754 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088758 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088763 | orchestrator | 2026-04-05 00:58:11.088768 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 00:58:11.088772 | orchestrator | Sunday 05 April 2026 00:50:34 +0000 (0:00:00.918) 0:03:45.810 ********** 2026-04-05 00:58:11.088777 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088782 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088787 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088791 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088796 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088801 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088806 | orchestrator | 2026-04-05 00:58:11.088810 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 00:58:11.088830 | orchestrator | Sunday 05 April 2026 00:50:34 +0000 (0:00:00.638) 0:03:46.448 ********** 2026-04-05 00:58:11.088836 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.088841 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.088846 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.088850 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.088855 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.088860 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.088864 | orchestrator | 2026-04-05 00:58:11.088869 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 00:58:11.088874 | orchestrator | Sunday 05 April 2026 00:50:35 +0000 (0:00:00.908) 0:03:47.357 **********
2026-04-05 00:58:11.088879 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.088883 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.088888 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.088893 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.088898 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.088902 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.088907 | orchestrator |
2026-04-05 00:58:11.088912 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 00:58:11.088920 | orchestrator | Sunday 05 April 2026 00:50:37 +0000 (0:00:01.443) 0:03:48.801 **********
2026-04-05 00:58:11.088925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.088930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.088935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.088939 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.088944 | orchestrator |
2026-04-05 00:58:11.088949 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 00:58:11.088958 | orchestrator | Sunday 05 April 2026 00:50:37 +0000 (0:00:00.871) 0:03:49.672 **********
2026-04-05 00:58:11.088966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.088974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.088982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.088990 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.088997 | orchestrator |
2026-04-05 00:58:11.089005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 00:58:11.089014 | orchestrator | Sunday 05 April 2026 00:50:38 +0000 (0:00:00.847) 0:03:50.519 **********
2026-04-05 00:58:11.089023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.089031 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.089037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.089041 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089046 | orchestrator |
2026-04-05 00:58:11.089051 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 00:58:11.089056 | orchestrator | Sunday 05 April 2026 00:50:39 +0000 (0:00:00.990) 0:03:51.510 **********
2026-04-05 00:58:11.089061 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.089065 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.089070 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.089075 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.089079 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.089088 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.089093 | orchestrator |
2026-04-05 00:58:11.089098 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 00:58:11.089139 | orchestrator | Sunday 05 April 2026 00:50:40 +0000 (0:00:00.867) 0:03:52.377 **********
2026-04-05 00:58:11.089145 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-05 00:58:11.089150 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-05 00:58:11.089154 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-05 00:58:11.089159 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-05 00:58:11.089164 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.089169 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-05 00:58:11.089173 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.089178 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-05 00:58:11.089183 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.089187 | orchestrator |
2026-04-05 00:58:11.089192 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 00:58:11.089197 | orchestrator | Sunday 05 April 2026 00:50:42 +0000 (0:00:02.094) 0:03:54.472 **********
2026-04-05 00:58:11.089202 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.089206 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.089211 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.089216 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.089221 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.089225 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.089230 | orchestrator |
2026-04-05 00:58:11.089235 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 00:58:11.089240 | orchestrator | Sunday 05 April 2026 00:50:45 +0000 (0:00:02.879) 0:03:57.352 **********
2026-04-05 00:58:11.089250 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.089255 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.089259 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.089264 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.089269 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.089274 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.089278 | orchestrator |
2026-04-05 00:58:11.089283 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-05 00:58:11.089288 | orchestrator | Sunday 05 April 2026 00:50:46 +0000 (0:00:01.389) 0:03:58.741 **********
2026-04-05 00:58:11.089293 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089298 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.089302 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.089307 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.089312 | orchestrator |
2026-04-05 00:58:11.089317 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-05 00:58:11.089340 | orchestrator | Sunday 05 April 2026 00:50:47 +0000 (0:00:00.877) 0:03:59.619 **********
2026-04-05 00:58:11.089346 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.089351 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.089356 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.089361 | orchestrator |
2026-04-05 00:58:11.089365 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-05 00:58:11.089370 | orchestrator | Sunday 05 April 2026 00:50:48 +0000 (0:00:00.289) 0:03:59.908 **********
2026-04-05 00:58:11.089375 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.089380 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.089385 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.089390 | orchestrator |
2026-04-05 00:58:11.089394 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-05 00:58:11.089399 | orchestrator | Sunday 05 April 2026 00:50:49 +0000 (0:00:01.163) 0:04:01.072 **********
2026-04-05 00:58:11.089403 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 00:58:11.089408 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 00:58:11.089412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 00:58:11.089417 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.089421 | orchestrator |
2026-04-05 00:58:11.089426 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-05 00:58:11.089430 | orchestrator | Sunday 05 April 2026 00:50:50 +0000 (0:00:00.805) 0:04:01.877 **********
2026-04-05 00:58:11.089435 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.089439 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.089444 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.089449 | orchestrator |
2026-04-05 00:58:11.089453 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-05 00:58:11.089458 | orchestrator | Sunday 05 April 2026 00:50:50 +0000 (0:00:00.313) 0:04:02.191 **********
2026-04-05 00:58:11.089462 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.089467 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.089471 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.089476 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.089480 | orchestrator |
2026-04-05 00:58:11.089485 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-05 00:58:11.089489 | orchestrator | Sunday 05 April 2026 00:50:51 +0000 (0:00:01.227) 0:04:03.419 **********
2026-04-05 00:58:11.089494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.089498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.089503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.089507 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089516 | orchestrator |
2026-04-05 00:58:11.089520 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-05 00:58:11.089525 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:00.479) 0:04:03.899 **********
2026-04-05 00:58:11.089529 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089534 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.089538 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.089543 | orchestrator |
2026-04-05 00:58:11.089547 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-05 00:58:11.089555 | orchestrator | Sunday 05 April 2026 00:50:52 +0000 (0:00:00.815) 0:04:04.714 **********
2026-04-05 00:58:11.089560 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089564 | orchestrator |
2026-04-05 00:58:11.089569 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-05 00:58:11.089573 | orchestrator | Sunday 05 April 2026 00:50:53 +0000 (0:00:00.374) 0:04:05.089 **********
2026-04-05 00:58:11.089578 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089582 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.089587 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.089591 | orchestrator |
2026-04-05 00:58:11.089596 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-05 00:58:11.089601 | orchestrator | Sunday 05 April 2026 00:50:53 +0000 (0:00:00.524) 0:04:05.613 **********
2026-04-05 00:58:11.089605 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089610 | orchestrator |
2026-04-05 00:58:11.089614 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-05 00:58:11.089619 | orchestrator | Sunday 05 April 2026 00:50:54 +0000 (0:00:00.249) 0:04:05.863 **********
2026-04-05 00:58:11.089623 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089628 | orchestrator |
2026-04-05 00:58:11.089632 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-05 00:58:11.089637 | orchestrator | Sunday 05 April 2026 00:50:54 +0000 (0:00:00.242) 0:04:06.106 **********
2026-04-05 00:58:11.089641 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089646 | orchestrator |
2026-04-05 00:58:11.089650 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-05 00:58:11.089655 | orchestrator | Sunday 05 April 2026 00:50:54 +0000 (0:00:00.181) 0:04:06.287 **********
2026-04-05 00:58:11.089659 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089664 | orchestrator |
2026-04-05 00:58:11.089668 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-05 00:58:11.089673 | orchestrator | Sunday 05 April 2026 00:50:54 +0000 (0:00:00.243) 0:04:06.531 **********
2026-04-05 00:58:11.089677 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089682 | orchestrator |
2026-04-05 00:58:11.089686 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-05 00:58:11.089691 | orchestrator | Sunday 05 April 2026 00:50:54 +0000 (0:00:00.227) 0:04:06.759 **********
2026-04-05 00:58:11.089695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.089700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.089704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.089709 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089713 | orchestrator |
2026-04-05 00:58:11.089718 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-05 00:58:11.089736 | orchestrator | Sunday 05 April 2026 00:50:55 +0000 (0:00:00.769) 0:04:07.528 **********
2026-04-05 00:58:11.089741 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089746 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.089750 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.089755 | orchestrator |
2026-04-05 00:58:11.089759 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-05 00:58:11.089764 | orchestrator | Sunday 05 April 2026 00:50:56 +0000 (0:00:00.701) 0:04:08.230 **********
2026-04-05 00:58:11.089773 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089778 | orchestrator |
2026-04-05 00:58:11.089782 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-05 00:58:11.089787 | orchestrator | Sunday 05 April 2026 00:50:56 +0000 (0:00:00.259) 0:04:08.490 **********
2026-04-05 00:58:11.089791 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089796 | orchestrator |
2026-04-05 00:58:11.089800 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-05 00:58:11.089805 | orchestrator | Sunday 05 April 2026 00:50:56 +0000 (0:00:00.244) 0:04:08.735 **********
2026-04-05 00:58:11.089809 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.089814 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.089818 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.089823 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.089827 | orchestrator |
2026-04-05 00:58:11.089832 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-05 00:58:11.089836 | orchestrator | Sunday 05 April 2026 00:50:58 +0000 (0:00:01.153) 0:04:09.888 **********
2026-04-05 00:58:11.089841 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.089846 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.089850 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.089855 | orchestrator |
2026-04-05 00:58:11.089859 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-05 00:58:11.089864 | orchestrator | Sunday 05 April 2026 00:50:58 +0000 (0:00:00.694) 0:04:10.582 **********
2026-04-05 00:58:11.089868 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.089873 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.089877 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.089881 | orchestrator |
2026-04-05 00:58:11.089886 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-05 00:58:11.089890 | orchestrator | Sunday 05 April 2026 00:51:00 +0000 (0:00:01.825) 0:04:12.408 **********
2026-04-05 00:58:11.089895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.089899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.089904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.089908 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.089913 | orchestrator |
2026-04-05 00:58:11.089917 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-05 00:58:11.089922 | orchestrator | Sunday 05 April 2026 00:51:01 +0000 (0:00:00.637) 0:04:13.046 **********
2026-04-05 00:58:11.089926 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.089931 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.089935 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.089940 | orchestrator |
2026-04-05 00:58:11.089947 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-05 00:58:11.089955 | orchestrator | Sunday 05 April 2026 00:51:01 +0000 (0:00:00.471) 0:04:13.518 **********
2026-04-05 00:58:11.089963 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.089970 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.089979 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.089986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.089993 | orchestrator |
2026-04-05 00:58:11.090001 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-05 00:58:11.090009 | orchestrator | Sunday 05 April 2026 00:51:03 +0000 (0:00:01.485) 0:04:15.004 **********
2026-04-05 00:58:11.090044 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.090053 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.090059 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.090067 | orchestrator |
2026-04-05 00:58:11.090074 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-05 00:58:11.090088 | orchestrator | Sunday 05 April 2026 00:51:03 +0000 (0:00:00.384) 0:04:15.388 **********
2026-04-05 00:58:11.090096 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.090115 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.090119 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.090124 | orchestrator |
2026-04-05 00:58:11.090129 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-05 00:58:11.090133 | orchestrator | Sunday 05 April 2026 00:51:05 +0000 (0:00:01.854) 0:04:17.243 **********
2026-04-05 00:58:11.090138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.090142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.090147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.090151 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.090156 | orchestrator |
2026-04-05 00:58:11.090160 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-05 00:58:11.090165 | orchestrator | Sunday 05 April 2026 00:51:06 +0000 (0:00:00.969) 0:04:18.212 **********
2026-04-05 00:58:11.090169 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.090174 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.090178 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.090183 | orchestrator |
2026-04-05 00:58:11.090187 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-05 00:58:11.090192 | orchestrator | Sunday 05 April 2026 00:51:07 +0000 (0:00:00.605) 0:04:18.818 **********
2026-04-05 00:58:11.090196 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.090201 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.090205 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.090210 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090214 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090237 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090243 | orchestrator |
2026-04-05 00:58:11.090248 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-05 00:58:11.090252 | orchestrator | Sunday 05 April 2026 00:51:07 +0000 (0:00:00.862) 0:04:19.680 **********
2026-04-05 00:58:11.090257 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.090261 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.090266 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.090270 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.090275 | orchestrator |
2026-04-05 00:58:11.090280 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-05 00:58:11.090284 | orchestrator | Sunday 05 April 2026 00:51:09 +0000 (0:00:01.376) 0:04:21.057 **********
2026-04-05 00:58:11.090289 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090293 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090298 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090302 | orchestrator |
2026-04-05 00:58:11.090307 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-05 00:58:11.090312 | orchestrator | Sunday 05 April 2026 00:51:09 +0000 (0:00:00.345) 0:04:21.402 **********
2026-04-05 00:58:11.090316 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.090321 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.090325 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.090330 | orchestrator |
2026-04-05 00:58:11.090334 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-05 00:58:11.090339 | orchestrator | Sunday 05 April 2026 00:51:11 +0000 (0:00:01.828) 0:04:23.231 **********
2026-04-05 00:58:11.090343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 00:58:11.090348 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 00:58:11.090352 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 00:58:11.090357 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090367 | orchestrator |
2026-04-05 00:58:11.090372 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-05 00:58:11.090377 | orchestrator | Sunday 05 April 2026 00:51:12 +0000 (0:00:00.687) 0:04:23.919 **********
2026-04-05 00:58:11.090381 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090386 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090390 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090395 | orchestrator |
2026-04-05 00:58:11.090399 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-05 00:58:11.090403 | orchestrator |
2026-04-05 00:58:11.090408 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 00:58:11.090413 | orchestrator | Sunday 05 April 2026 00:51:12 +0000 (0:00:00.613) 0:04:24.533 **********
2026-04-05 00:58:11.090417 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.090422 | orchestrator |
2026-04-05 00:58:11.090426 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 00:58:11.090431 | orchestrator | Sunday 05 April 2026 00:51:13 +0000 (0:00:00.809) 0:04:25.342 **********
2026-04-05 00:58:11.090441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.090445 | orchestrator |
2026-04-05 00:58:11.090450 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 00:58:11.090455 | orchestrator | Sunday 05 April 2026 00:51:14 +0000 (0:00:00.678) 0:04:26.020 **********
2026-04-05 00:58:11.090459 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090464 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090468 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090473 | orchestrator |
2026-04-05 00:58:11.090477 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 00:58:11.090482 | orchestrator | Sunday 05 April 2026 00:51:14 +0000 (0:00:00.733) 0:04:26.754 **********
2026-04-05 00:58:11.090486 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090491 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090495 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090500 | orchestrator |
2026-04-05 00:58:11.090505 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 00:58:11.090509 | orchestrator | Sunday 05 April 2026 00:51:15 +0000 (0:00:00.446) 0:04:27.200 **********
2026-04-05 00:58:11.090514 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090518 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090523 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090527 | orchestrator |
2026-04-05 00:58:11.090532 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 00:58:11.090536 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:00.724) 0:04:27.925 **********
2026-04-05 00:58:11.090541 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090545 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090550 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090554 | orchestrator |
2026-04-05 00:58:11.090559 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 00:58:11.090563 | orchestrator | Sunday 05 April 2026 00:51:16 +0000 (0:00:00.375) 0:04:28.301 **********
2026-04-05 00:58:11.090568 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090572 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090577 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090581 | orchestrator |
2026-04-05 00:58:11.090586 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 00:58:11.090590 | orchestrator | Sunday 05 April 2026 00:51:17 +0000 (0:00:00.682) 0:04:28.984 **********
2026-04-05 00:58:11.090595 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090599 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090604 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090608 | orchestrator |
2026-04-05 00:58:11.090617 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 00:58:11.090621 | orchestrator | Sunday 05 April 2026 00:51:17 +0000 (0:00:00.351) 0:04:29.335 **********
2026-04-05 00:58:11.090640 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090646 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090650 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090655 | orchestrator |
2026-04-05 00:58:11.090659 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 00:58:11.090664 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:00.639) 0:04:29.975 **********
2026-04-05 00:58:11.090668 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090673 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090678 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090682 | orchestrator |
2026-04-05 00:58:11.090687 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 00:58:11.090691 | orchestrator | Sunday 05 April 2026 00:51:18 +0000 (0:00:00.765) 0:04:30.741 **********
2026-04-05 00:58:11.090696 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090700 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090705 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090709 | orchestrator |
2026-04-05 00:58:11.090714 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 00:58:11.090718 | orchestrator | Sunday 05 April 2026 00:51:19 +0000 (0:00:00.892) 0:04:31.633 **********
2026-04-05 00:58:11.090733 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090738 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090743 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090747 | orchestrator |
2026-04-05 00:58:11.090752 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 00:58:11.090756 | orchestrator | Sunday 05 April 2026 00:51:20 +0000 (0:00:00.448) 0:04:32.081 **********
2026-04-05 00:58:11.090761 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090765 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090770 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090774 | orchestrator |
2026-04-05 00:58:11.090779 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 00:58:11.090784 | orchestrator | Sunday 05 April 2026 00:51:21 +0000 (0:00:01.585) 0:04:33.667 **********
2026-04-05 00:58:11.090788 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090793 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090797 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090802 | orchestrator |
2026-04-05 00:58:11.090806 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 00:58:11.090811 | orchestrator | Sunday 05 April 2026 00:51:22 +0000 (0:00:00.579) 0:04:34.247 **********
2026-04-05 00:58:11.090815 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090820 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090824 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090829 | orchestrator |
2026-04-05 00:58:11.090834 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 00:58:11.090838 | orchestrator | Sunday 05 April 2026 00:51:22 +0000 (0:00:00.449) 0:04:34.696 **********
2026-04-05 00:58:11.090843 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090847 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090852 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090856 | orchestrator |
2026-04-05 00:58:11.090861 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 00:58:11.090865 | orchestrator | Sunday 05 April 2026 00:51:23 +0000 (0:00:00.528) 0:04:35.225 **********
2026-04-05 00:58:11.090870 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090878 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090883 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090887 | orchestrator |
2026-04-05 00:58:11.090892 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 00:58:11.090900 | orchestrator | Sunday 05 April 2026 00:51:24 +0000 (0:00:00.666) 0:04:35.892 **********
2026-04-05 00:58:11.090905 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.090909 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.090914 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.090918 | orchestrator |
2026-04-05 00:58:11.090923 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 00:58:11.090927 | orchestrator | Sunday 05 April 2026 00:51:24 +0000 (0:00:00.369) 0:04:36.261 **********
2026-04-05 00:58:11.090932 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090936 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090941 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090946 | orchestrator |
2026-04-05 00:58:11.090950 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 00:58:11.090955 | orchestrator | Sunday 05 April 2026 00:51:24 +0000 (0:00:00.360) 0:04:36.621 **********
2026-04-05 00:58:11.090959 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090964 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090968 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.090973 | orchestrator |
2026-04-05 00:58:11.090977 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 00:58:11.090982 | orchestrator | Sunday 05 April 2026 00:51:25 +0000 (0:00:00.539) 0:04:37.161 **********
2026-04-05 00:58:11.090986 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.090991 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.090995 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.091000 | orchestrator |
2026-04-05 00:58:11.091004 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-05 00:58:11.091009 | orchestrator | Sunday 05 April 2026 00:51:26 +0000 (0:00:01.429) 0:04:38.591 **********
2026-04-05 00:58:11.091013 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.091018 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.091022 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.091027 | orchestrator |
2026-04-05 00:58:11.091031 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-05 00:58:11.091036 | orchestrator | Sunday 05 April 2026 00:51:27 +0000 (0:00:00.557) 0:04:39.148 **********
2026-04-05 00:58:11.091040 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.091045 | orchestrator |
2026-04-05 00:58:11.091050 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-05 00:58:11.091054 | orchestrator | Sunday 05 April 2026 00:51:28 +0000 (0:00:01.582) 0:04:40.731 **********
2026-04-05 00:58:11.091059 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.091063 | orchestrator |
2026-04-05 00:58:11.091085 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-05 00:58:11.091090 | orchestrator | Sunday 05 April 2026 00:51:29 +0000 (0:00:00.488) 0:04:41.219 **********
2026-04-05 00:58:11.091095 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 00:58:11.091099 | orchestrator |
2026-04-05 00:58:11.091117 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-05 00:58:11.091122 | orchestrator | Sunday 05 April 2026 00:51:30 +0000 (0:00:01.530) 0:04:42.750 **********
2026-04-05 00:58:11.091126 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.091131 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.091136 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.091140 | orchestrator |
2026-04-05 00:58:11.091145 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-05 00:58:11.091149 | orchestrator | Sunday 05 April 2026 00:51:31 +0000 (0:00:00.399) 0:04:43.150 **********
2026-04-05 00:58:11.091154 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.091158 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.091163 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.091167 | orchestrator |
2026-04-05 00:58:11.091172 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-05 00:58:11.091181 | orchestrator | Sunday 05 April 2026 00:51:32 +0000 (0:00:00.697) 0:04:43.847 **********
2026-04-05 00:58:11.091186 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.091190 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.091195 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.091199 | orchestrator |
2026-04-05 00:58:11.091204 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-05 00:58:11.091208 | orchestrator | Sunday 05 April 2026 00:51:33 +0000 (0:00:01.834) 0:04:45.682 **********
2026-04-05 00:58:11.091213 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.091217 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.091222 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.091226 | orchestrator |
2026-04-05 00:58:11.091231 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-05 00:58:11.091235 | orchestrator | Sunday 05 April 2026 00:51:35 +0000 (0:00:01.125) 0:04:46.808 **********
2026-04-05 00:58:11.091240 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.091244 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.091249 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.091253 | orchestrator |
2026-04-05 00:58:11.091258 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-05 00:58:11.091262 | orchestrator | Sunday 05 April 2026 00:51:35 +0000 (0:00:00.814) 0:04:47.622 **********
2026-04-05 00:58:11.091267 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.091271 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.091276 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.091280 | orchestrator |
2026-04-05 00:58:11.091285 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-05 00:58:11.091289 | orchestrator | Sunday 05 April 2026 00:51:36 +0000 (0:00:01.051) 0:04:48.674 **********
2026-04-05 00:58:11.091294 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.091298 | orchestrator |
2026-04-05 00:58:11.091303 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-05 00:58:11.091307 | orchestrator | Sunday 05 April 2026 00:51:38 +0000 (0:00:01.383) 0:04:50.057 **********
2026-04-05 00:58:11.091312 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.091316 | orchestrator |
2026-04-05 00:58:11.091324 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-05 00:58:11.091329 | orchestrator | Sunday 05 April 2026 00:51:39 +0000 (0:00:00.778) 0:04:50.836 **********
2026-04-05 00:58:11.091333 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.091338 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 00:58:11.091342 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 00:58:11.091347 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-05 00:58:11.091351 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-05 00:58:11.091356 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-05 00:58:11.091360 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 00:58:11.091365 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-05 00:58:11.091369 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 00:58:11.091374 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-05 00:58:11.091378 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-05 00:58:11.091383 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-05 00:58:11.091387 | orchestrator |
2026-04-05 00:58:11.091392 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-05 00:58:11.091396 | orchestrator | Sunday 05 April 2026 00:51:43 +0000 (0:00:04.308) 0:04:55.145 **********
2026-04-05 00:58:11.091401 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.091406 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.091410 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.091420 | orchestrator |
2026-04-05 00:58:11.091424 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-05 00:58:11.091429 | orchestrator | Sunday 05 April 2026 00:51:45 +0000 (0:00:02.287) 0:04:57.433 **********
2026-04-05 00:58:11.091433 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.091438 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.091442 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.091447 | orchestrator | 2026-04-05 00:58:11.091451 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-05 00:58:11.091456 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:00.470) 0:04:57.903 ********** 2026-04-05 00:58:11.091460 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.091465 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.091469 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.091474 | orchestrator | 2026-04-05 00:58:11.091478 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-05 00:58:11.091483 | orchestrator | Sunday 05 April 2026 00:51:46 +0000 (0:00:00.450) 0:04:58.353 ********** 2026-04-05 00:58:11.091488 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.091508 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.091514 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.091519 | orchestrator | 2026-04-05 00:58:11.091523 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-05 00:58:11.091528 | orchestrator | Sunday 05 April 2026 00:51:48 +0000 (0:00:02.142) 0:05:00.496 ********** 2026-04-05 00:58:11.091532 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.091537 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.091541 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.091546 | orchestrator | 2026-04-05 00:58:11.091550 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-05 00:58:11.091555 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:01.886) 0:05:02.382 ********** 2026-04-05 00:58:11.091560 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.091564 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.091569 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.091573 
| orchestrator | 2026-04-05 00:58:11.091578 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-05 00:58:11.091582 | orchestrator | Sunday 05 April 2026 00:51:50 +0000 (0:00:00.364) 0:05:02.746 ********** 2026-04-05 00:58:11.091587 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.091599 | orchestrator | 2026-04-05 00:58:11.091604 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-05 00:58:11.091609 | orchestrator | Sunday 05 April 2026 00:51:51 +0000 (0:00:00.599) 0:05:03.346 ********** 2026-04-05 00:58:11.091613 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.091618 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.091623 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.091627 | orchestrator | 2026-04-05 00:58:11.091638 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-05 00:58:11.091643 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:00.665) 0:05:04.011 ********** 2026-04-05 00:58:11.091647 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.091652 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.091656 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.091661 | orchestrator | 2026-04-05 00:58:11.091665 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-05 00:58:11.091670 | orchestrator | Sunday 05 April 2026 00:51:52 +0000 (0:00:00.375) 0:05:04.387 ********** 2026-04-05 00:58:11.091674 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.091679 | orchestrator | 2026-04-05 00:58:11.091684 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-04-05 00:58:11.091688 | orchestrator | Sunday 05 April 2026 00:51:53 +0000 (0:00:00.734) 0:05:05.121 ********** 2026-04-05 00:58:11.091697 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.091701 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.091706 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.091710 | orchestrator | 2026-04-05 00:58:11.091715 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-05 00:58:11.091720 | orchestrator | Sunday 05 April 2026 00:51:55 +0000 (0:00:02.363) 0:05:07.485 ********** 2026-04-05 00:58:11.091727 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.091732 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.091736 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.091741 | orchestrator | 2026-04-05 00:58:11.091745 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-05 00:58:11.091750 | orchestrator | Sunday 05 April 2026 00:51:56 +0000 (0:00:01.255) 0:05:08.740 ********** 2026-04-05 00:58:11.091754 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.091759 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.091763 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.091768 | orchestrator | 2026-04-05 00:58:11.091772 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-05 00:58:11.091777 | orchestrator | Sunday 05 April 2026 00:51:58 +0000 (0:00:01.821) 0:05:10.561 ********** 2026-04-05 00:58:11.091781 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.091786 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.091790 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.091795 | orchestrator | 2026-04-05 00:58:11.091799 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-04-05 00:58:11.091804 | orchestrator | Sunday 05 April 2026 00:52:01 +0000 (0:00:02.250) 0:05:12.812 ********** 2026-04-05 00:58:11.091808 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.091813 | orchestrator | 2026-04-05 00:58:11.091817 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-05 00:58:11.091822 | orchestrator | Sunday 05 April 2026 00:52:01 +0000 (0:00:00.807) 0:05:13.619 ********** 2026-04-05 00:58:11.091826 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-04-05 00:58:11.091831 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.091835 | orchestrator | 2026-04-05 00:58:11.091840 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-05 00:58:11.091844 | orchestrator | Sunday 05 April 2026 00:52:23 +0000 (0:00:21.596) 0:05:35.216 ********** 2026-04-05 00:58:11.091849 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.091853 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.091858 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.091863 | orchestrator | 2026-04-05 00:58:11.091867 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-05 00:58:11.091872 | orchestrator | Sunday 05 April 2026 00:52:30 +0000 (0:00:06.661) 0:05:41.877 ********** 2026-04-05 00:58:11.091876 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.091881 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.091885 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.091889 | orchestrator | 2026-04-05 00:58:11.091894 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-05 00:58:11.091915 | orchestrator | 
Sunday 05 April 2026 00:52:30 +0000 (0:00:00.363) 0:05:42.241 ********** 2026-04-05 00:58:11.091923 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__49b425dac44d65c674961c872ce66077e2dce2ae'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-05 00:58:11.091929 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__49b425dac44d65c674961c872ce66077e2dce2ae'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-05 00:58:11.091941 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__49b425dac44d65c674961c872ce66077e2dce2ae'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-05 00:58:11.091947 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__49b425dac44d65c674961c872ce66077e2dce2ae'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-05 00:58:11.091952 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__49b425dac44d65c674961c872ce66077e2dce2ae'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-05 00:58:11.091960 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__49b425dac44d65c674961c872ce66077e2dce2ae'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__49b425dac44d65c674961c872ce66077e2dce2ae'}])  2026-04-05 00:58:11.091967 | orchestrator | 2026-04-05 00:58:11.091972 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 00:58:11.091976 | orchestrator | Sunday 05 April 2026 00:52:42 +0000 (0:00:11.654) 0:05:53.895 ********** 2026-04-05 00:58:11.091981 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.091985 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.091990 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.091995 | orchestrator | 2026-04-05 00:58:11.091999 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-05 00:58:11.092004 | orchestrator | Sunday 05 April 2026 00:52:42 +0000 (0:00:00.402) 0:05:54.297 ********** 2026-04-05 00:58:11.092008 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.092013 | orchestrator | 2026-04-05 00:58:11.092018 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-05 00:58:11.092022 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:00.864) 0:05:55.161 ********** 2026-04-05 00:58:11.092027 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092031 | orchestrator | ok: [testbed-node-1] 2026-04-05 
00:58:11.092036 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092041 | orchestrator | 2026-04-05 00:58:11.092045 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-05 00:58:11.092050 | orchestrator | Sunday 05 April 2026 00:52:43 +0000 (0:00:00.422) 0:05:55.583 ********** 2026-04-05 00:58:11.092054 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092059 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092063 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092068 | orchestrator | 2026-04-05 00:58:11.092072 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-05 00:58:11.092077 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:00.359) 0:05:55.943 ********** 2026-04-05 00:58:11.092081 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 00:58:11.092090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 00:58:11.092094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 00:58:11.092099 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092131 | orchestrator | 2026-04-05 00:58:11.092136 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-05 00:58:11.092141 | orchestrator | Sunday 05 April 2026 00:52:44 +0000 (0:00:00.661) 0:05:56.605 ********** 2026-04-05 00:58:11.092145 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092150 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092172 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092178 | orchestrator | 2026-04-05 00:58:11.092183 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-05 00:58:11.092187 | orchestrator | 2026-04-05 00:58:11.092192 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-04-05 00:58:11.092196 | orchestrator | Sunday 05 April 2026 00:52:45 +0000 (0:00:00.861) 0:05:57.466 ********** 2026-04-05 00:58:11.092201 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.092205 | orchestrator | 2026-04-05 00:58:11.092210 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 00:58:11.092214 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:00.555) 0:05:58.022 ********** 2026-04-05 00:58:11.092219 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.092223 | orchestrator | 2026-04-05 00:58:11.092228 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 00:58:11.092232 | orchestrator | Sunday 05 April 2026 00:52:46 +0000 (0:00:00.555) 0:05:58.577 ********** 2026-04-05 00:58:11.092237 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092241 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092246 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092250 | orchestrator | 2026-04-05 00:58:11.092255 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 00:58:11.092259 | orchestrator | Sunday 05 April 2026 00:52:47 +0000 (0:00:00.968) 0:05:59.545 ********** 2026-04-05 00:58:11.092264 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092268 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092273 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092277 | orchestrator | 2026-04-05 00:58:11.092282 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 00:58:11.092286 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 
(0:00:00.307) 0:05:59.853 ********** 2026-04-05 00:58:11.092291 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092295 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092300 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092304 | orchestrator | 2026-04-05 00:58:11.092309 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 00:58:11.092313 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:00.288) 0:06:00.141 ********** 2026-04-05 00:58:11.092317 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092322 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092326 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092331 | orchestrator | 2026-04-05 00:58:11.092335 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 00:58:11.092340 | orchestrator | Sunday 05 April 2026 00:52:48 +0000 (0:00:00.333) 0:06:00.475 ********** 2026-04-05 00:58:11.092344 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092349 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092353 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092358 | orchestrator | 2026-04-05 00:58:11.092362 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 00:58:11.092371 | orchestrator | Sunday 05 April 2026 00:52:49 +0000 (0:00:01.204) 0:06:01.679 ********** 2026-04-05 00:58:11.092380 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092384 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092389 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092393 | orchestrator | 2026-04-05 00:58:11.092398 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 00:58:11.092402 | orchestrator | Sunday 05 April 2026 00:52:50 +0000 (0:00:00.332) 
0:06:02.011 ********** 2026-04-05 00:58:11.092407 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092411 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092416 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092420 | orchestrator | 2026-04-05 00:58:11.092424 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 00:58:11.092429 | orchestrator | Sunday 05 April 2026 00:52:50 +0000 (0:00:00.335) 0:06:02.347 ********** 2026-04-05 00:58:11.092433 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092438 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092442 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092447 | orchestrator | 2026-04-05 00:58:11.092451 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 00:58:11.092456 | orchestrator | Sunday 05 April 2026 00:52:51 +0000 (0:00:00.776) 0:06:03.123 ********** 2026-04-05 00:58:11.092460 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092465 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092469 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092474 | orchestrator | 2026-04-05 00:58:11.092478 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 00:58:11.092483 | orchestrator | Sunday 05 April 2026 00:52:52 +0000 (0:00:01.163) 0:06:04.287 ********** 2026-04-05 00:58:11.092487 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092492 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092496 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092501 | orchestrator | 2026-04-05 00:58:11.092505 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 00:58:11.092510 | orchestrator | Sunday 05 April 2026 00:52:52 +0000 (0:00:00.323) 0:06:04.610 ********** 2026-04-05 
00:58:11.092514 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092519 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092523 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092528 | orchestrator | 2026-04-05 00:58:11.092532 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 00:58:11.092537 | orchestrator | Sunday 05 April 2026 00:52:53 +0000 (0:00:00.365) 0:06:04.976 ********** 2026-04-05 00:58:11.092541 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092546 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092550 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092555 | orchestrator | 2026-04-05 00:58:11.092559 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 00:58:11.092581 | orchestrator | Sunday 05 April 2026 00:52:53 +0000 (0:00:00.331) 0:06:05.308 ********** 2026-04-05 00:58:11.092587 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092591 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092596 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092600 | orchestrator | 2026-04-05 00:58:11.092605 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 00:58:11.092609 | orchestrator | Sunday 05 April 2026 00:52:54 +0000 (0:00:00.588) 0:06:05.897 ********** 2026-04-05 00:58:11.092614 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092618 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092623 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092627 | orchestrator | 2026-04-05 00:58:11.092632 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 00:58:11.092636 | orchestrator | Sunday 05 April 2026 00:52:54 +0000 (0:00:00.337) 0:06:06.234 ********** 2026-04-05 00:58:11.092641 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092649 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092653 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092657 | orchestrator | 2026-04-05 00:58:11.092662 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 00:58:11.092666 | orchestrator | Sunday 05 April 2026 00:52:54 +0000 (0:00:00.326) 0:06:06.560 ********** 2026-04-05 00:58:11.092670 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.092674 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.092678 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.092682 | orchestrator | 2026-04-05 00:58:11.092686 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 00:58:11.092690 | orchestrator | Sunday 05 April 2026 00:52:55 +0000 (0:00:00.332) 0:06:06.893 ********** 2026-04-05 00:58:11.092694 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092698 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092702 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092706 | orchestrator | 2026-04-05 00:58:11.092710 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 00:58:11.092714 | orchestrator | Sunday 05 April 2026 00:52:55 +0000 (0:00:00.338) 0:06:07.231 ********** 2026-04-05 00:58:11.092718 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.092722 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092726 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092730 | orchestrator | 2026-04-05 00:58:11.092735 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 00:58:11.092739 | orchestrator | Sunday 05 April 2026 00:52:56 +0000 (0:00:00.744) 0:06:07.976 ********** 2026-04-05 00:58:11.092743 | orchestrator | ok: [testbed-node-0] 
2026-04-05 00:58:11.092747 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.092751 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.092755 | orchestrator | 2026-04-05 00:58:11.092759 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-05 00:58:11.092763 | orchestrator | Sunday 05 April 2026 00:52:56 +0000 (0:00:00.581) 0:06:08.557 ********** 2026-04-05 00:58:11.092767 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 00:58:11.092771 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 00:58:11.092775 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 00:58:11.092779 | orchestrator | 2026-04-05 00:58:11.092788 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-05 00:58:11.092792 | orchestrator | Sunday 05 April 2026 00:52:57 +0000 (0:00:00.992) 0:06:09.550 ********** 2026-04-05 00:58:11.092797 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.092801 | orchestrator | 2026-04-05 00:58:11.092805 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-05 00:58:11.092809 | orchestrator | Sunday 05 April 2026 00:52:58 +0000 (0:00:01.144) 0:06:10.695 ********** 2026-04-05 00:58:11.092813 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:58:11.092817 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:58:11.092821 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:58:11.092825 | orchestrator | 2026-04-05 00:58:11.092829 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-05 00:58:11.092833 | orchestrator | Sunday 05 April 2026 00:52:59 +0000 (0:00:00.839) 0:06:11.534 ********** 2026-04-05 00:58:11.092837 | 
orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.092841 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.092845 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.092849 | orchestrator |
2026-04-05 00:58:11.092853 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-05 00:58:11.092857 | orchestrator | Sunday 05 April 2026 00:53:00 +0000 (0:00:00.415) 0:06:11.950 **********
2026-04-05 00:58:11.092861 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.092869 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.092873 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.092878 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-05 00:58:11.092882 | orchestrator |
2026-04-05 00:58:11.092886 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-05 00:58:11.092890 | orchestrator | Sunday 05 April 2026 00:53:08 +0000 (0:00:08.807) 0:06:20.758 **********
2026-04-05 00:58:11.092894 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.092898 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.092902 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.092906 | orchestrator |
2026-04-05 00:58:11.092910 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-05 00:58:11.092914 | orchestrator | Sunday 05 April 2026 00:53:09 +0000 (0:00:00.608) 0:06:21.366 **********
2026-04-05 00:58:11.092918 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.092922 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 00:58:11.092926 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 00:58:11.092930 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.092934 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 00:58:11.092953 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 00:58:11.092958 | orchestrator |
2026-04-05 00:58:11.092962 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-05 00:58:11.092966 | orchestrator | Sunday 05 April 2026 00:53:11 +0000 (0:00:01.855) 0:06:23.222 **********
2026-04-05 00:58:11.092970 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.092974 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 00:58:11.092978 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 00:58:11.092982 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-05 00:58:11.092986 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 00:58:11.092991 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-05 00:58:11.092995 | orchestrator |
2026-04-05 00:58:11.092999 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-05 00:58:11.093003 | orchestrator | Sunday 05 April 2026 00:53:12 +0000 (0:00:01.442) 0:06:24.664 **********
2026-04-05 00:58:11.093007 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.093011 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.093015 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.093019 | orchestrator |
2026-04-05 00:58:11.093023 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-05 00:58:11.093027 | orchestrator | Sunday 05 April 2026 00:53:13 +0000 (0:00:00.734) 0:06:25.399 **********
2026-04-05 00:58:11.093031 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.093035 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.093040 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.093044 | orchestrator |
2026-04-05 00:58:11.093048 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-05 00:58:11.093052 | orchestrator | Sunday 05 April 2026 00:53:14 +0000 (0:00:00.555) 0:06:25.954 **********
2026-04-05 00:58:11.093056 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.093060 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.093064 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.093068 | orchestrator |
2026-04-05 00:58:11.093072 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-05 00:58:11.093076 | orchestrator | Sunday 05 April 2026 00:53:14 +0000 (0:00:00.306) 0:06:26.261 **********
2026-04-05 00:58:11.093080 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.093084 | orchestrator |
2026-04-05 00:58:11.093088 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-05 00:58:11.093096 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.524) 0:06:26.786 **********
2026-04-05 00:58:11.093111 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.093116 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.093120 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.093124 | orchestrator |
2026-04-05 00:58:11.093128 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-05 00:58:11.093132 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.331) 0:06:27.117 **********
2026-04-05 00:58:11.093136 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.093141 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.093145 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:58:11.093149 | orchestrator |
2026-04-05 00:58:11.093156 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-05 00:58:11.093160 | orchestrator | Sunday 05 April 2026 00:53:15 +0000 (0:00:00.606) 0:06:27.724 **********
2026-04-05 00:58:11.093164 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.093168 | orchestrator |
2026-04-05 00:58:11.093173 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-05 00:58:11.093177 | orchestrator | Sunday 05 April 2026 00:53:16 +0000 (0:00:00.533) 0:06:28.257 **********
2026-04-05 00:58:11.093181 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.093185 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.093189 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.093193 | orchestrator |
2026-04-05 00:58:11.093197 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-05 00:58:11.093201 | orchestrator | Sunday 05 April 2026 00:53:17 +0000 (0:00:01.298) 0:06:29.556 **********
2026-04-05 00:58:11.093205 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.093209 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.093213 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.093217 | orchestrator |
2026-04-05 00:58:11.093221 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-05 00:58:11.093226 | orchestrator | Sunday 05 April 2026 00:53:19 +0000 (0:00:01.987) 0:06:31.159 **********
2026-04-05 00:58:11.093230 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.093234 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.093238 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.093242 | orchestrator |
2026-04-05 00:58:11.093246 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-05 00:58:11.093250 | orchestrator | Sunday 05 April 2026 00:53:21 +0000 (0:00:01.987) 0:06:33.146 **********
2026-04-05 00:58:11.093254 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.093258 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.093262 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.093266 | orchestrator |
2026-04-05 00:58:11.093270 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 00:58:11.093275 | orchestrator | Sunday 05 April 2026 00:53:23 +0000 (0:00:02.090) 0:06:35.237 **********
2026-04-05 00:58:11.093279 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.093283 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:58:11.093287 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-05 00:58:11.093291 | orchestrator |
2026-04-05 00:58:11.093295 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-05 00:58:11.093299 | orchestrator | Sunday 05 April 2026 00:53:23 +0000 (0:00:00.395) 0:06:35.633 **********
2026-04-05 00:58:11.093318 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-05 00:58:11.093323 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-05 00:58:11.093327 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.093331 | orchestrator |
2026-04-05 00:58:11.093340 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-05 00:58:11.093344 | orchestrator | Sunday 05 April 2026 00:53:37 +0000 (0:00:13.602) 0:06:49.235 **********
2026-04-05 00:58:11.093348 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.093352 | orchestrator |
2026-04-05 00:58:11.093356 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-05 00:58:11.093360 | orchestrator | Sunday 05 April 2026 00:53:38 +0000 (0:00:01.382) 0:06:50.618 **********
2026-04-05 00:58:11.093364 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.093368 | orchestrator |
2026-04-05 00:58:11.093372 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-05 00:58:11.093376 | orchestrator | Sunday 05 April 2026 00:53:39 +0000 (0:00:00.351) 0:06:50.970 **********
2026-04-05 00:58:11.093380 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.093384 | orchestrator |
2026-04-05 00:58:11.093389 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-05 00:58:11.093393 | orchestrator | Sunday 05 April 2026 00:53:39 +0000 (0:00:00.160) 0:06:51.130 **********
2026-04-05 00:58:11.093397 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-05 00:58:11.093401 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-05 00:58:11.093405 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-05 00:58:11.093409 | orchestrator |
2026-04-05 00:58:11.093413 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-05 00:58:11.093417 | orchestrator | Sunday 05 April 2026 00:53:45 +0000 (0:00:06.102) 0:06:57.232 **********
2026-04-05 00:58:11.093421 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-05 00:58:11.093425 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-05 00:58:11.093430 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-05 00:58:11.093434 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-05 00:58:11.093438 | orchestrator |
2026-04-05 00:58:11.093442 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 00:58:11.093446 | orchestrator | Sunday 05 April 2026 00:53:50 +0000 (0:00:04.737) 0:07:01.969 **********
2026-04-05 00:58:11.093450 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.093454 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.093458 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.093462 | orchestrator |
2026-04-05 00:58:11.093466 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-05 00:58:11.093470 | orchestrator | Sunday 05 April 2026 00:53:51 +0000 (0:00:01.020) 0:07:02.990 **********
2026-04-05 00:58:11.093477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.093481 | orchestrator |
2026-04-05 00:58:11.093485 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-05 00:58:11.093489 | orchestrator | Sunday 05 April 2026 00:53:51 +0000 (0:00:00.514) 0:07:03.504 **********
2026-04-05 00:58:11.093493 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.093497 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.093501 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.093505 | orchestrator |
2026-04-05 00:58:11.093509 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-05 00:58:11.093513 | orchestrator | Sunday 05 April 2026 00:53:52 +0000 (0:00:00.375) 0:07:03.880 **********
2026-04-05 00:58:11.093517 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.093522 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.093526 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.093530 | orchestrator |
2026-04-05 00:58:11.093534 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-05 00:58:11.093541 | orchestrator | Sunday 05 April 2026 00:53:53 +0000 (0:00:01.598) 0:07:05.478 **********
2026-04-05 00:58:11.093545 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 00:58:11.093549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 00:58:11.093553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 00:58:11.093557 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:58:11.093561 | orchestrator |
2026-04-05 00:58:11.093566 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-05 00:58:11.093570 | orchestrator | Sunday 05 April 2026 00:53:54 +0000 (0:00:00.633) 0:07:06.111 **********
2026-04-05 00:58:11.093574 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.093578 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.093582 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.093586 | orchestrator |
2026-04-05 00:58:11.093590 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-05 00:58:11.093594 | orchestrator |
2026-04-05 00:58:11.093598 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 00:58:11.093602 | orchestrator | Sunday 05 April 2026 00:53:54 +0000 (0:00:00.633) 0:07:06.744 **********
2026-04-05 00:58:11.093606 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.093610 | orchestrator |
2026-04-05 00:58:11.093614 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 00:58:11.093619 | orchestrator | Sunday 05 April 2026 00:53:55 +0000 (0:00:00.770) 0:07:07.514 **********
2026-04-05 00:58:11.093635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.093640 | orchestrator |
2026-04-05 00:58:11.093645 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 00:58:11.093649 | orchestrator | Sunday 05 April 2026 00:53:56 +0000 (0:00:00.549) 0:07:08.064 **********
2026-04-05 00:58:11.093653 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.093657 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.093661 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.093665 | orchestrator |
2026-04-05 00:58:11.093669 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 00:58:11.093673 | orchestrator | Sunday 05 April 2026 00:53:56 +0000 (0:00:00.300) 0:07:08.365 **********
2026-04-05 00:58:11.093677 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093681 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093685 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093689 | orchestrator |
2026-04-05 00:58:11.093693 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 00:58:11.093698 | orchestrator | Sunday 05 April 2026 00:53:57 +0000 (0:00:01.025) 0:07:09.390 **********
2026-04-05 00:58:11.093702 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093706 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093710 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093714 | orchestrator |
2026-04-05 00:58:11.093718 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 00:58:11.093722 | orchestrator | Sunday 05 April 2026 00:53:58 +0000 (0:00:00.754) 0:07:10.144 **********
2026-04-05 00:58:11.093726 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093730 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093734 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093738 | orchestrator |
2026-04-05 00:58:11.093742 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 00:58:11.093746 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.799) 0:07:10.944 **********
2026-04-05 00:58:11.093750 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.093754 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.093758 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.093763 | orchestrator |
2026-04-05 00:58:11.093767 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 00:58:11.093774 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.293) 0:07:11.237 **********
2026-04-05 00:58:11.093779 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.093783 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.093787 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.093791 | orchestrator |
2026-04-05 00:58:11.093795 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 00:58:11.093799 | orchestrator | Sunday 05 April 2026 00:53:59 +0000 (0:00:00.467) 0:07:11.705 **********
2026-04-05 00:58:11.093803 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.093807 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.093811 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.093815 | orchestrator |
2026-04-05 00:58:11.093819 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 00:58:11.093823 | orchestrator | Sunday 05 April 2026 00:54:00 +0000 (0:00:00.297) 0:07:12.002 **********
2026-04-05 00:58:11.093827 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093831 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093838 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093842 | orchestrator |
2026-04-05 00:58:11.093846 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 00:58:11.093850 | orchestrator | Sunday 05 April 2026 00:54:00 +0000 (0:00:00.732) 0:07:12.735 **********
2026-04-05 00:58:11.093855 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093859 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093863 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093867 | orchestrator |
2026-04-05 00:58:11.093871 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 00:58:11.093875 | orchestrator | Sunday 05 April 2026 00:54:01 +0000 (0:00:00.657) 0:07:13.393 **********
2026-04-05 00:58:11.093879 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.093883 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.093887 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.093891 | orchestrator |
2026-04-05 00:58:11.093895 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 00:58:11.093899 | orchestrator | Sunday 05 April 2026 00:54:02 +0000 (0:00:00.456) 0:07:13.850 **********
2026-04-05 00:58:11.093903 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.093907 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.093911 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.093916 | orchestrator |
2026-04-05 00:58:11.093920 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 00:58:11.093924 | orchestrator | Sunday 05 April 2026 00:54:02 +0000 (0:00:00.343) 0:07:14.193 **********
2026-04-05 00:58:11.093928 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093932 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093936 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093940 | orchestrator |
2026-04-05 00:58:11.093944 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 00:58:11.093948 | orchestrator | Sunday 05 April 2026 00:54:02 +0000 (0:00:00.324) 0:07:14.517 **********
2026-04-05 00:58:11.093952 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093956 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093960 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093964 | orchestrator |
2026-04-05 00:58:11.093968 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 00:58:11.093972 | orchestrator | Sunday 05 April 2026 00:54:03 +0000 (0:00:00.322) 0:07:14.840 **********
2026-04-05 00:58:11.093976 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.093981 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.093985 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.093989 | orchestrator |
2026-04-05 00:58:11.093993 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 00:58:11.093997 | orchestrator | Sunday 05 April 2026 00:54:03 +0000 (0:00:00.669) 0:07:15.510 **********
2026-04-05 00:58:11.094004 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094008 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094012 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094043 | orchestrator |
2026-04-05 00:58:11.094060 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 00:58:11.094065 | orchestrator | Sunday 05 April 2026 00:54:04 +0000 (0:00:00.302) 0:07:15.813 **********
2026-04-05 00:58:11.094069 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094073 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094078 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094082 | orchestrator |
2026-04-05 00:58:11.094086 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 00:58:11.094090 | orchestrator | Sunday 05 April 2026 00:54:04 +0000 (0:00:00.312) 0:07:16.125 **********
2026-04-05 00:58:11.094094 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094098 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094147 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094152 | orchestrator |
2026-04-05 00:58:11.094156 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 00:58:11.094160 | orchestrator | Sunday 05 April 2026 00:54:04 +0000 (0:00:00.307) 0:07:16.433 **********
2026-04-05 00:58:11.094164 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.094168 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.094172 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.094176 | orchestrator |
2026-04-05 00:58:11.094180 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 00:58:11.094184 | orchestrator | Sunday 05 April 2026 00:54:05 +0000 (0:00:00.648) 0:07:17.081 **********
2026-04-05 00:58:11.094188 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.094192 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.094196 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.094200 | orchestrator |
2026-04-05 00:58:11.094205 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-05 00:58:11.094209 | orchestrator | Sunday 05 April 2026 00:54:05 +0000 (0:00:00.587) 0:07:17.668 **********
2026-04-05 00:58:11.094213 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.094217 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.094221 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.094225 | orchestrator |
2026-04-05 00:58:11.094229 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-05 00:58:11.094233 | orchestrator | Sunday 05 April 2026 00:54:06 +0000 (0:00:00.377) 0:07:18.046 **********
2026-04-05 00:58:11.094237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 00:58:11.094241 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 00:58:11.094245 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 00:58:11.094249 | orchestrator |
2026-04-05 00:58:11.094253 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-05 00:58:11.094257 | orchestrator | Sunday 05 April 2026 00:54:07 +0000 (0:00:01.069) 0:07:19.116 **********
2026-04-05 00:58:11.094262 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.094266 | orchestrator |
2026-04-05 00:58:11.094270 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-05 00:58:11.094274 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:00.778) 0:07:19.895 **********
2026-04-05 00:58:11.094281 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094285 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094289 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094292 | orchestrator |
2026-04-05 00:58:11.094296 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-05 00:58:11.094300 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:00.296) 0:07:20.192 **********
2026-04-05 00:58:11.094308 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094311 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094315 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094319 | orchestrator |
2026-04-05 00:58:11.094322 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-05 00:58:11.094326 | orchestrator | Sunday 05 April 2026 00:54:08 +0000 (0:00:00.367) 0:07:20.559 **********
2026-04-05 00:58:11.094330 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.094334 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.094337 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.094341 | orchestrator |
2026-04-05 00:58:11.094345 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-05 00:58:11.094348 | orchestrator | Sunday 05 April 2026 00:54:09 +0000 (0:00:01.016) 0:07:21.576 **********
2026-04-05 00:58:11.094352 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.094356 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.094360 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.094363 | orchestrator |
2026-04-05 00:58:11.094367 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-05 00:58:11.094371 | orchestrator | Sunday 05 April 2026 00:54:10 +0000 (0:00:00.331) 0:07:21.907 **********
2026-04-05 00:58:11.094374 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-05 00:58:11.094378 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-05 00:58:11.094382 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-05 00:58:11.094386 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-05 00:58:11.094389 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-05 00:58:11.094393 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-05 00:58:11.094397 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-05 00:58:11.094401 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-05 00:58:11.094407 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-05 00:58:11.094411 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-05 00:58:11.094414 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-05 00:58:11.094418 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-05 00:58:11.094422 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-05 00:58:11.094426 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-05 00:58:11.094429 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-05 00:58:11.094433 | orchestrator |
2026-04-05 00:58:11.094437 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-05 00:58:11.094441 | orchestrator | Sunday 05 April 2026 00:54:12 +0000 (0:00:02.201) 0:07:24.109 **********
2026-04-05 00:58:11.094444 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094448 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094452 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094455 | orchestrator |
2026-04-05 00:58:11.094459 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-05 00:58:11.094463 | orchestrator | Sunday 05 April 2026 00:54:12 +0000 (0:00:00.375) 0:07:24.484 **********
2026-04-05 00:58:11.094467 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.094474 | orchestrator |
2026-04-05 00:58:11.094477 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-05 00:58:11.094481 | orchestrator | Sunday 05 April 2026 00:54:13 +0000 (0:00:00.842) 0:07:25.327 **********
2026-04-05 00:58:11.094485 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-05 00:58:11.094488 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-05 00:58:11.094492 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-05 00:58:11.094496 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-05 00:58:11.094500 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-05 00:58:11.094503 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-05 00:58:11.094507 | orchestrator |
2026-04-05 00:58:11.094511 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-05 00:58:11.094514 | orchestrator | Sunday 05 April 2026 00:54:14 +0000 (0:00:01.063) 0:07:26.391 **********
2026-04-05 00:58:11.094518 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 00:58:11.094522 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.094525 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-05 00:58:11.094529 | orchestrator |
2026-04-05 00:58:11.094533 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-05 00:58:11.094539 | orchestrator | Sunday 05 April 2026 00:54:16 +0000 (0:00:01.783) 0:07:28.175 **********
2026-04-05 00:58:11.094543 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.094547 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.094551 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.094554 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 00:58:11.094558 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-05 00:58:11.094562 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.094566 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 00:58:11.094569 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-05 00:58:11.094573 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.094577 | orchestrator |
2026-04-05 00:58:11.094580 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-05 00:58:11.094584 | orchestrator | Sunday 05 April 2026 00:54:17 +0000 (0:00:01.411) 0:07:29.586 **********
2026-04-05 00:58:11.094588 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.094591 | orchestrator |
2026-04-05 00:58:11.094595 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-05 00:58:11.094599 | orchestrator | Sunday 05 April 2026 00:54:19 +0000 (0:00:02.036) 0:07:31.622 **********
2026-04-05 00:58:11.094603 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.094606 | orchestrator |
2026-04-05 00:58:11.094610 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-05 00:58:11.094614 | orchestrator | Sunday 05 April 2026 00:54:20 +0000 (0:00:00.584) 0:07:32.207 **********
2026-04-05 00:58:11.094618 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-84662fb7-c7ec-5f43-83c1-849532919194', 'data_vg': 'ceph-84662fb7-c7ec-5f43-83c1-849532919194'})
2026-04-05 00:58:11.094624 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9657aa76-f30a-575f-81fa-dc230eadde03', 'data_vg': 'ceph-9657aa76-f30a-575f-81fa-dc230eadde03'})
2026-04-05 00:58:11.094628 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a', 'data_vg': 'ceph-01ae77dd-7b74-52e9-8a2e-c19e3ec8ad7a'})
2026-04-05 00:58:11.094632 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8a27db0d-e52c-5340-bfad-66c075ab1c61', 'data_vg': 'ceph-8a27db0d-e52c-5340-bfad-66c075ab1c61'})
2026-04-05 00:58:11.094642 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-df39e39b-9449-5ecb-9afa-151663e06960', 'data_vg': 'ceph-df39e39b-9449-5ecb-9afa-151663e06960'})
2026-04-05 00:58:11.094650 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1dbeab33-88c6-544f-8f85-2175dc04d523', 'data_vg': 'ceph-1dbeab33-88c6-544f-8f85-2175dc04d523'})
2026-04-05 00:58:11.094654 | orchestrator |
2026-04-05 00:58:11.094657 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-05 00:58:11.094661 | orchestrator | Sunday 05 April 2026 00:54:56 +0000 (0:00:35.751) 0:08:07.959 **********
2026-04-05 00:58:11.094665 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094669 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094672 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094676 | orchestrator |
2026-04-05 00:58:11.094680 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-05 00:58:11.094684 | orchestrator | Sunday 05 April 2026 00:54:56 +0000 (0:00:00.638) 0:08:08.598 **********
2026-04-05 00:58:11.094687 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.094691 | orchestrator |
2026-04-05 00:58:11.094695 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-05 00:58:11.094699 | orchestrator | Sunday 05 April 2026 00:54:57 +0000 (0:00:00.536) 0:08:09.134 **********
2026-04-05 00:58:11.094702 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.094706 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.094710 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.094714 | orchestrator |
2026-04-05 00:58:11.094718 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-05 00:58:11.094721 | orchestrator | Sunday 05 April 2026 00:54:57 +0000 (0:00:00.636) 0:08:09.771 **********
2026-04-05 00:58:11.094725 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.094729 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.094732 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.094736 | orchestrator |
2026-04-05 00:58:11.094740 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-05 00:58:11.094744 | orchestrator | Sunday 05 April 2026 00:54:59 +0000 (0:00:01.910) 0:08:11.681 **********
2026-04-05 00:58:11.094747 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.094751 | orchestrator |
2026-04-05 00:58:11.094755 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-05 00:58:11.094759 | orchestrator | Sunday 05 April 2026 00:55:00 +0000 (0:00:00.546) 0:08:12.228 **********
2026-04-05 00:58:11.094762 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.094766 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.094770 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.094773 | orchestrator |
2026-04-05 00:58:11.094777 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-05 00:58:11.094781 | orchestrator | Sunday 05 April 2026 00:55:01 +0000 (0:00:01.223) 0:08:13.451 **********
2026-04-05 00:58:11.094785 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.094788 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.094792 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.094796 | orchestrator |
2026-04-05 00:58:11.094799 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-05 00:58:11.094807 | orchestrator | Sunday 05 April 2026 00:55:03 +0000 (0:00:01.694) 0:08:15.146 **********
2026-04-05 00:58:11.094811 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.094815 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.094819 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.094822 | orchestrator |
2026-04-05 00:58:11.094826 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-05 00:58:11.094830 | orchestrator | Sunday 05 April 2026 00:55:05 +0000 (0:00:01.758) 0:08:16.904 **********
2026-04-05 00:58:11.094834 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.094841 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.094844 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.094848 | orchestrator |
2026-04-05 00:58:11.094852 | orchestrator | TASK [ceph-osd : Add ceph-osd
systemd service overrides] *********************** 2026-04-05 00:58:11.094855 | orchestrator | Sunday 05 April 2026 00:55:05 +0000 (0:00:00.418) 0:08:17.323 ********** 2026-04-05 00:58:11.094859 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.094863 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.094867 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.094870 | orchestrator | 2026-04-05 00:58:11.094874 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-05 00:58:11.094878 | orchestrator | Sunday 05 April 2026 00:55:05 +0000 (0:00:00.333) 0:08:17.657 ********** 2026-04-05 00:58:11.094881 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 00:58:11.094885 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-05 00:58:11.094889 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-05 00:58:11.094893 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-05 00:58:11.094896 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-05 00:58:11.094900 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-05 00:58:11.094904 | orchestrator | 2026-04-05 00:58:11.094908 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-05 00:58:11.094911 | orchestrator | Sunday 05 April 2026 00:55:07 +0000 (0:00:01.522) 0:08:19.179 ********** 2026-04-05 00:58:11.094915 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-05 00:58:11.094919 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-05 00:58:11.094922 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-05 00:58:11.094926 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-05 00:58:11.094930 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-05 00:58:11.094933 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-05 00:58:11.094937 | orchestrator | 2026-04-05 00:58:11.094941 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-05 00:58:11.094945 | orchestrator | Sunday 05 April 2026 00:55:09 +0000 (0:00:02.216) 0:08:21.396 ********** 2026-04-05 00:58:11.094948 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-05 00:58:11.094952 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-05 00:58:11.094958 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-05 00:58:11.094962 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-05 00:58:11.094966 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-05 00:58:11.094970 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-05 00:58:11.094973 | orchestrator | 2026-04-05 00:58:11.094977 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-05 00:58:11.094981 | orchestrator | Sunday 05 April 2026 00:55:13 +0000 (0:00:03.648) 0:08:25.044 ********** 2026-04-05 00:58:11.094985 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.094988 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.094992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 00:58:11.094996 | orchestrator | 2026-04-05 00:58:11.095000 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-05 00:58:11.095003 | orchestrator | Sunday 05 April 2026 00:55:16 +0000 (0:00:02.834) 0:08:27.878 ********** 2026-04-05 00:58:11.095007 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095011 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095014 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-05 00:58:11.095018 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 00:58:11.095022 | orchestrator | 2026-04-05 00:58:11.095026 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-05 00:58:11.095030 | orchestrator | Sunday 05 April 2026 00:55:29 +0000 (0:00:13.233) 0:08:41.112 ********** 2026-04-05 00:58:11.095037 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095041 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095045 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095048 | orchestrator | 2026-04-05 00:58:11.095052 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 00:58:11.095056 | orchestrator | Sunday 05 April 2026 00:55:30 +0000 (0:00:00.852) 0:08:41.964 ********** 2026-04-05 00:58:11.095060 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095063 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095067 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095071 | orchestrator | 2026-04-05 00:58:11.095075 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-05 00:58:11.095078 | orchestrator | Sunday 05 April 2026 00:55:30 +0000 (0:00:00.638) 0:08:42.603 ********** 2026-04-05 00:58:11.095082 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:58:11.095086 | orchestrator | 2026-04-05 00:58:11.095090 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-05 00:58:11.095093 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:00.575) 0:08:43.178 ********** 2026-04-05 00:58:11.095097 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 00:58:11.095112 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-05 00:58:11.095116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 00:58:11.095120 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095124 | orchestrator | 2026-04-05 00:58:11.095130 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-05 00:58:11.095134 | orchestrator | Sunday 05 April 2026 00:55:31 +0000 (0:00:00.421) 0:08:43.600 ********** 2026-04-05 00:58:11.095138 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095141 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095145 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095149 | orchestrator | 2026-04-05 00:58:11.095152 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-05 00:58:11.095156 | orchestrator | Sunday 05 April 2026 00:55:32 +0000 (0:00:00.322) 0:08:43.922 ********** 2026-04-05 00:58:11.095160 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095163 | orchestrator | 2026-04-05 00:58:11.095167 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-05 00:58:11.095171 | orchestrator | Sunday 05 April 2026 00:55:32 +0000 (0:00:00.704) 0:08:44.627 ********** 2026-04-05 00:58:11.095175 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095178 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095182 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095186 | orchestrator | 2026-04-05 00:58:11.095189 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-05 00:58:11.095193 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.295) 0:08:44.923 ********** 2026-04-05 00:58:11.095197 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095201 | orchestrator | 2026-04-05 00:58:11.095204 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-05 00:58:11.095208 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.211) 0:08:45.135 ********** 2026-04-05 00:58:11.095212 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095215 | orchestrator | 2026-04-05 00:58:11.095219 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-05 00:58:11.095223 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.221) 0:08:45.356 ********** 2026-04-05 00:58:11.095227 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095230 | orchestrator | 2026-04-05 00:58:11.095234 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-05 00:58:11.095238 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.101) 0:08:45.458 ********** 2026-04-05 00:58:11.095241 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095248 | orchestrator | 2026-04-05 00:58:11.095252 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-05 00:58:11.095256 | orchestrator | Sunday 05 April 2026 00:55:33 +0000 (0:00:00.281) 0:08:45.739 ********** 2026-04-05 00:58:11.095260 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095263 | orchestrator | 2026-04-05 00:58:11.095267 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-05 00:58:11.095271 | orchestrator | Sunday 05 April 2026 00:55:34 +0000 (0:00:00.228) 0:08:45.967 ********** 2026-04-05 00:58:11.095277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 00:58:11.095281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 00:58:11.095285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 00:58:11.095289 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
00:58:11.095292 | orchestrator | 2026-04-05 00:58:11.095296 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-05 00:58:11.095300 | orchestrator | Sunday 05 April 2026 00:55:34 +0000 (0:00:00.324) 0:08:46.292 ********** 2026-04-05 00:58:11.095303 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095307 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095311 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095315 | orchestrator | 2026-04-05 00:58:11.095318 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-05 00:58:11.095322 | orchestrator | Sunday 05 April 2026 00:55:34 +0000 (0:00:00.329) 0:08:46.621 ********** 2026-04-05 00:58:11.095326 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095329 | orchestrator | 2026-04-05 00:58:11.095333 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-05 00:58:11.095337 | orchestrator | Sunday 05 April 2026 00:55:35 +0000 (0:00:00.633) 0:08:47.254 ********** 2026-04-05 00:58:11.095340 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095344 | orchestrator | 2026-04-05 00:58:11.095348 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-04-05 00:58:11.095352 | orchestrator | 2026-04-05 00:58:11.095355 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 00:58:11.095359 | orchestrator | Sunday 05 April 2026 00:55:36 +0000 (0:00:00.598) 0:08:47.853 ********** 2026-04-05 00:58:11.095363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.095368 | orchestrator | 2026-04-05 00:58:11.095371 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-04-05 00:58:11.095375 | orchestrator | Sunday 05 April 2026 00:55:37 +0000 (0:00:01.088) 0:08:48.941 ********** 2026-04-05 00:58:11.095379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:58:11.095382 | orchestrator | 2026-04-05 00:58:11.095386 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 00:58:11.095390 | orchestrator | Sunday 05 April 2026 00:55:38 +0000 (0:00:01.125) 0:08:50.067 ********** 2026-04-05 00:58:11.095394 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095397 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095401 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095405 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.095408 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.095412 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.095416 | orchestrator | 2026-04-05 00:58:11.095420 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 00:58:11.095423 | orchestrator | Sunday 05 April 2026 00:55:39 +0000 (0:00:01.169) 0:08:51.237 ********** 2026-04-05 00:58:11.095430 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095434 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095441 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095444 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095448 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095452 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095455 | orchestrator | 2026-04-05 00:58:11.095459 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 00:58:11.095463 | orchestrator | Sunday 05 
April 2026 00:55:40 +0000 (0:00:00.739) 0:08:51.976 ********** 2026-04-05 00:58:11.095466 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095470 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095474 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095478 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095481 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095485 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095489 | orchestrator | 2026-04-05 00:58:11.095492 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 00:58:11.095496 | orchestrator | Sunday 05 April 2026 00:55:41 +0000 (0:00:00.964) 0:08:52.940 ********** 2026-04-05 00:58:11.095500 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095503 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095507 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095511 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095514 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095518 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095522 | orchestrator | 2026-04-05 00:58:11.095525 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 00:58:11.095529 | orchestrator | Sunday 05 April 2026 00:55:41 +0000 (0:00:00.746) 0:08:53.687 ********** 2026-04-05 00:58:11.095533 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095537 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095540 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095544 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.095548 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.095551 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.095555 | orchestrator | 2026-04-05 00:58:11.095559 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-04-05 00:58:11.095562 | orchestrator | Sunday 05 April 2026 00:55:43 +0000 (0:00:01.286) 0:08:54.973 ********** 2026-04-05 00:58:11.095566 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095570 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095573 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095577 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095581 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095584 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095588 | orchestrator | 2026-04-05 00:58:11.095592 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 00:58:11.095596 | orchestrator | Sunday 05 April 2026 00:55:43 +0000 (0:00:00.612) 0:08:55.586 ********** 2026-04-05 00:58:11.095599 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095605 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095609 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095613 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095617 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095620 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095624 | orchestrator | 2026-04-05 00:58:11.095628 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 00:58:11.095631 | orchestrator | Sunday 05 April 2026 00:55:44 +0000 (0:00:00.593) 0:08:56.180 ********** 2026-04-05 00:58:11.095635 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095639 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095643 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095646 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.095650 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.095654 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.095660 | 
orchestrator | 2026-04-05 00:58:11.095664 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 00:58:11.095668 | orchestrator | Sunday 05 April 2026 00:55:45 +0000 (0:00:01.413) 0:08:57.593 ********** 2026-04-05 00:58:11.095672 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095675 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095679 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095683 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.095686 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.095690 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.095694 | orchestrator | 2026-04-05 00:58:11.095697 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 00:58:11.095701 | orchestrator | Sunday 05 April 2026 00:55:46 +0000 (0:00:01.065) 0:08:58.659 ********** 2026-04-05 00:58:11.095705 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095709 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095712 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095716 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095720 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095723 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095727 | orchestrator | 2026-04-05 00:58:11.095731 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 00:58:11.095735 | orchestrator | Sunday 05 April 2026 00:55:47 +0000 (0:00:00.939) 0:08:59.598 ********** 2026-04-05 00:58:11.095738 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095742 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095746 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095749 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.095753 | orchestrator | ok: [testbed-node-1] 2026-04-05 
00:58:11.095757 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.095760 | orchestrator | 2026-04-05 00:58:11.095764 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 00:58:11.095768 | orchestrator | Sunday 05 April 2026 00:55:48 +0000 (0:00:00.614) 0:09:00.212 ********** 2026-04-05 00:58:11.095771 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095775 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095779 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095783 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095786 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095790 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095794 | orchestrator | 2026-04-05 00:58:11.095797 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 00:58:11.095804 | orchestrator | Sunday 05 April 2026 00:55:49 +0000 (0:00:00.931) 0:09:01.143 ********** 2026-04-05 00:58:11.095807 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095811 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095815 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095819 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095822 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095826 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095830 | orchestrator | 2026-04-05 00:58:11.095833 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 00:58:11.095837 | orchestrator | Sunday 05 April 2026 00:55:49 +0000 (0:00:00.601) 0:09:01.745 ********** 2026-04-05 00:58:11.095841 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095844 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095848 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095852 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 00:58:11.095856 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095859 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095863 | orchestrator | 2026-04-05 00:58:11.095867 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 00:58:11.095870 | orchestrator | Sunday 05 April 2026 00:55:50 +0000 (0:00:00.918) 0:09:02.663 ********** 2026-04-05 00:58:11.095874 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095881 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095884 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095888 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095892 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095895 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095899 | orchestrator | 2026-04-05 00:58:11.095903 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 00:58:11.095907 | orchestrator | Sunday 05 April 2026 00:55:51 +0000 (0:00:00.611) 0:09:03.275 ********** 2026-04-05 00:58:11.095910 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095914 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.095918 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095921 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:58:11.095925 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:58:11.095929 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:58:11.095932 | orchestrator | 2026-04-05 00:58:11.095936 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 00:58:11.095940 | orchestrator | Sunday 05 April 2026 00:55:52 +0000 (0:00:00.959) 0:09:04.234 ********** 2026-04-05 00:58:11.095943 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.095947 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 00:58:11.095951 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.095954 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.095958 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.095962 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.095966 | orchestrator | 2026-04-05 00:58:11.095969 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 00:58:11.095975 | orchestrator | Sunday 05 April 2026 00:55:53 +0000 (0:00:00.652) 0:09:04.887 ********** 2026-04-05 00:58:11.095979 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.095983 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.095986 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.095990 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.095994 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.095998 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.096001 | orchestrator | 2026-04-05 00:58:11.096005 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 00:58:11.096009 | orchestrator | Sunday 05 April 2026 00:55:54 +0000 (0:00:00.973) 0:09:05.860 ********** 2026-04-05 00:58:11.096013 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.096016 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.096020 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.096024 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:58:11.096027 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:58:11.096031 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:58:11.096035 | orchestrator | 2026-04-05 00:58:11.096039 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-05 00:58:11.096042 | orchestrator | Sunday 05 April 2026 00:55:55 +0000 (0:00:01.276) 0:09:07.137 ********** 2026-04-05 00:58:11.096046 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.096050 | orchestrator |
2026-04-05 00:58:11.096054 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-05 00:58:11.096057 | orchestrator | Sunday 05 April 2026 00:55:58 +0000 (0:00:03.295) 0:09:10.433 **********
2026-04-05 00:58:11.096061 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.096065 | orchestrator |
2026-04-05 00:58:11.096068 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-05 00:58:11.096072 | orchestrator | Sunday 05 April 2026 00:56:00 +0000 (0:00:01.739) 0:09:12.172 **********
2026-04-05 00:58:11.096076 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.096080 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.096083 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.096087 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.096091 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.096097 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.096111 | orchestrator |
2026-04-05 00:58:11.096115 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-05 00:58:11.096119 | orchestrator | Sunday 05 April 2026 00:56:01 +0000 (0:00:01.537) 0:09:13.710 **********
2026-04-05 00:58:11.096122 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.096126 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.096130 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.096133 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.096137 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.096141 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.096144 | orchestrator |
2026-04-05 00:58:11.096148 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-05 00:58:11.096152 | orchestrator | Sunday 05 April 2026 00:56:03 +0000 (0:00:01.739) 0:09:15.450 **********
2026-04-05 00:58:11.096156 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.096161 | orchestrator |
2026-04-05 00:58:11.096165 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-05 00:58:11.096171 | orchestrator | Sunday 05 April 2026 00:56:04 +0000 (0:00:01.113) 0:09:16.564 **********
2026-04-05 00:58:11.096175 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.096178 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.096182 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.096186 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.096189 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.096193 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.096197 | orchestrator |
2026-04-05 00:58:11.096200 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-05 00:58:11.096204 | orchestrator | Sunday 05 April 2026 00:56:06 +0000 (0:00:01.518) 0:09:18.082 **********
2026-04-05 00:58:11.096208 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.096212 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.096215 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.096219 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.096222 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.096226 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.096230 | orchestrator |
2026-04-05 00:58:11.096234 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-05 00:58:11.096237 | orchestrator | Sunday 05 April 2026 00:56:09 +0000 (0:00:03.392) 0:09:21.475 **********
2026-04-05 00:58:11.096241 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:58:11.096245 | orchestrator |
2026-04-05 00:58:11.096249 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-05 00:58:11.096252 | orchestrator | Sunday 05 April 2026 00:56:10 +0000 (0:00:01.114) 0:09:22.589 **********
2026-04-05 00:58:11.096256 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096260 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096264 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096267 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.096271 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.096275 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.096278 | orchestrator |
2026-04-05 00:58:11.096282 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-05 00:58:11.096286 | orchestrator | Sunday 05 April 2026 00:56:11 +0000 (0:00:00.536) 0:09:23.126 **********
2026-04-05 00:58:11.096290 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.096293 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.096297 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.096301 | orchestrator | changed: [testbed-node-1]
2026-04-05 00:58:11.096304 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:58:11.096311 | orchestrator | changed: [testbed-node-2]
2026-04-05 00:58:11.096315 | orchestrator |
2026-04-05 00:58:11.096319 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-05 00:58:11.096325 | orchestrator | Sunday 05 April 2026 00:56:13 +0000 (0:00:02.347) 0:09:25.473 **********
2026-04-05 00:58:11.096329 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096333 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096337 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096340 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:58:11.096344 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:58:11.096348 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:58:11.096351 | orchestrator |
2026-04-05 00:58:11.096355 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-05 00:58:11.096359 | orchestrator |
2026-04-05 00:58:11.096363 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 00:58:11.096367 | orchestrator | Sunday 05 April 2026 00:56:14 +0000 (0:00:00.875) 0:09:26.349 **********
2026-04-05 00:58:11.096371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.096374 | orchestrator |
2026-04-05 00:58:11.096378 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 00:58:11.096382 | orchestrator | Sunday 05 April 2026 00:56:15 +0000 (0:00:00.801) 0:09:27.150 **********
2026-04-05 00:58:11.096386 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.096389 | orchestrator |
2026-04-05 00:58:11.096393 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 00:58:11.096397 | orchestrator | Sunday 05 April 2026 00:56:15 +0000 (0:00:00.508) 0:09:27.659 **********
2026-04-05 00:58:11.096401 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096404 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096408 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096412 | orchestrator |
2026-04-05 00:58:11.096415 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 00:58:11.096419 | orchestrator | Sunday 05 April 2026 00:56:16 +0000 (0:00:00.609) 0:09:28.268 **********
2026-04-05 00:58:11.096423 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096426 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096430 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096434 | orchestrator |
2026-04-05 00:58:11.096438 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 00:58:11.096441 | orchestrator | Sunday 05 April 2026 00:56:17 +0000 (0:00:00.708) 0:09:28.977 **********
2026-04-05 00:58:11.096445 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096449 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096452 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096456 | orchestrator |
2026-04-05 00:58:11.096460 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 00:58:11.096464 | orchestrator | Sunday 05 April 2026 00:56:18 +0000 (0:00:00.817) 0:09:29.794 **********
2026-04-05 00:58:11.096467 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096471 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096475 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096478 | orchestrator |
2026-04-05 00:58:11.096482 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 00:58:11.096486 | orchestrator | Sunday 05 April 2026 00:56:18 +0000 (0:00:00.783) 0:09:30.578 **********
2026-04-05 00:58:11.096490 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096493 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096499 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096503 | orchestrator |
2026-04-05 00:58:11.096507 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 00:58:11.096511 | orchestrator | Sunday 05 April 2026 00:56:19 +0000 (0:00:00.662) 0:09:31.241 **********
2026-04-05 00:58:11.096520 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096524 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096527 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096531 | orchestrator |
2026-04-05 00:58:11.096535 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 00:58:11.096538 | orchestrator | Sunday 05 April 2026 00:56:19 +0000 (0:00:00.391) 0:09:31.632 **********
2026-04-05 00:58:11.096542 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096546 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096550 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096553 | orchestrator |
2026-04-05 00:58:11.096557 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 00:58:11.096561 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:00.345) 0:09:31.978 **********
2026-04-05 00:58:11.096564 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096568 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096572 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096575 | orchestrator |
2026-04-05 00:58:11.096579 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 00:58:11.096583 | orchestrator | Sunday 05 April 2026 00:56:20 +0000 (0:00:00.774) 0:09:32.752 **********
2026-04-05 00:58:11.096587 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096590 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096594 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096598 | orchestrator |
2026-04-05 00:58:11.096601 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 00:58:11.096605 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:01.093) 0:09:33.846 **********
2026-04-05 00:58:11.096609 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096613 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096616 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096620 | orchestrator |
2026-04-05 00:58:11.096624 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 00:58:11.096627 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:00.332) 0:09:34.179 **********
2026-04-05 00:58:11.096631 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096635 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096638 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096642 | orchestrator |
2026-04-05 00:58:11.096646 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 00:58:11.096650 | orchestrator | Sunday 05 April 2026 00:56:22 +0000 (0:00:00.320) 0:09:34.500 **********
2026-04-05 00:58:11.096653 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096657 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096663 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096667 | orchestrator |
2026-04-05 00:58:11.096670 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 00:58:11.096674 | orchestrator | Sunday 05 April 2026 00:56:23 +0000 (0:00:00.346) 0:09:34.847 **********
2026-04-05 00:58:11.096678 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096682 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096685 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096689 | orchestrator |
2026-04-05 00:58:11.096693 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 00:58:11.096697 | orchestrator | Sunday 05 April 2026 00:56:23 +0000 (0:00:00.467) 0:09:35.314 **********
2026-04-05 00:58:11.096700 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096704 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096708 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096712 | orchestrator |
2026-04-05 00:58:11.096715 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 00:58:11.096719 | orchestrator | Sunday 05 April 2026 00:56:23 +0000 (0:00:00.295) 0:09:35.610 **********
2026-04-05 00:58:11.096723 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096726 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096735 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096739 | orchestrator |
2026-04-05 00:58:11.096742 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 00:58:11.096746 | orchestrator | Sunday 05 April 2026 00:56:24 +0000 (0:00:00.298) 0:09:35.908 **********
2026-04-05 00:58:11.096750 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096754 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096757 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096761 | orchestrator |
2026-04-05 00:58:11.096765 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 00:58:11.096768 | orchestrator | Sunday 05 April 2026 00:56:24 +0000 (0:00:00.308) 0:09:36.216 **********
2026-04-05 00:58:11.096772 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096776 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096779 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096783 | orchestrator |
2026-04-05 00:58:11.096787 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 00:58:11.096790 | orchestrator | Sunday 05 April 2026 00:56:24 +0000 (0:00:00.452) 0:09:36.669 **********
2026-04-05 00:58:11.096794 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096798 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096802 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096805 | orchestrator |
2026-04-05 00:58:11.096809 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 00:58:11.096813 | orchestrator | Sunday 05 April 2026 00:56:25 +0000 (0:00:00.399) 0:09:37.069 **********
2026-04-05 00:58:11.096816 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.096820 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.096824 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.096827 | orchestrator |
2026-04-05 00:58:11.096831 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-05 00:58:11.096835 | orchestrator | Sunday 05 April 2026 00:56:25 +0000 (0:00:00.484) 0:09:37.553 **********
2026-04-05 00:58:11.096839 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.096842 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.096849 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-05 00:58:11.096853 | orchestrator |
2026-04-05 00:58:11.096857 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-05 00:58:11.096861 | orchestrator | Sunday 05 April 2026 00:56:26 +0000 (0:00:00.536) 0:09:38.090 **********
2026-04-05 00:58:11.096864 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.096868 | orchestrator |
2026-04-05 00:58:11.096872 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-05 00:58:11.096875 | orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:01.796) 0:09:39.886 **********
2026-04-05 00:58:11.096881 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-05 00:58:11.096887 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.096890 | orchestrator |
2026-04-05 00:58:11.096894 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-05 00:58:11.096898 | orchestrator | Sunday 05 April 2026 00:56:28 +0000 (0:00:00.217) 0:09:40.104 **********
2026-04-05 00:58:11.096903 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 00:58:11.096912 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 00:58:11.096920 | orchestrator |
2026-04-05 00:58:11.096924 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-05 00:58:11.096927 | orchestrator | Sunday 05 April 2026 00:56:35 +0000 (0:00:06.802) 0:09:46.907 **********
2026-04-05 00:58:11.096931 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 00:58:11.096935 | orchestrator |
2026-04-05 00:58:11.096939 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-05 00:58:11.096942 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:02.747) 0:09:49.654 **********
2026-04-05 00:58:11.096948 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.096952 | orchestrator |
2026-04-05 00:58:11.096956 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-05 00:58:11.096960 | orchestrator | Sunday 05 April 2026 00:56:38 +0000 (0:00:00.722) 0:09:50.376 **********
2026-04-05 00:58:11.096963 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 00:58:11.096967 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 00:58:11.096971 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 00:58:11.096975 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-05 00:58:11.096978 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-05 00:58:11.096982 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-05 00:58:11.096986 | orchestrator |
2026-04-05 00:58:11.096990 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-05 00:58:11.096993 | orchestrator | Sunday 05 April 2026 00:56:39 +0000 (0:00:00.992) 0:09:51.369 **********
2026-04-05 00:58:11.096997 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 00:58:11.097001 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.097004 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-05 00:58:11.097008 | orchestrator |
2026-04-05 00:58:11.097012 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-05 00:58:11.097016 | orchestrator | Sunday 05 April 2026 00:56:41 +0000 (0:00:01.661) 0:09:53.030 **********
2026-04-05 00:58:11.097019 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.097023 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.097027 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097031 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 00:58:11.097034 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-05 00:58:11.097038 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097042 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 00:58:11.097046 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-05 00:58:11.097049 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097053 | orchestrator |
2026-04-05 00:58:11.097057 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-05 00:58:11.097061 | orchestrator | Sunday 05 April 2026 00:56:42 +0000 (0:00:01.231) 0:09:54.261 **********
2026-04-05 00:58:11.097064 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097068 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097072 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097076 | orchestrator |
2026-04-05 00:58:11.097079 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-05 00:58:11.097083 | orchestrator | Sunday 05 April 2026 00:56:45 +0000 (0:00:02.715) 0:09:56.977 **********
2026-04-05 00:58:11.097087 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097091 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097094 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097126 | orchestrator |
2026-04-05 00:58:11.097134 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-05 00:58:11.097138 | orchestrator | Sunday 05 April 2026 00:56:45 +0000 (0:00:00.347) 0:09:57.324 **********
2026-04-05 00:58:11.097141 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.097145 | orchestrator |
2026-04-05 00:58:11.097149 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-05 00:58:11.097153 | orchestrator | Sunday 05 April 2026 00:56:46 +0000 (0:00:00.718) 0:09:58.042 **********
2026-04-05 00:58:11.097156 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-04-05 00:58:11.097160 | orchestrator |
2026-04-05 00:58:11.097164 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-05 00:58:11.097168 | orchestrator | Sunday 05 April 2026 00:56:47 +0000 (0:00:00.929) 0:09:58.972 **********
2026-04-05 00:58:11.097171 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097175 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097179 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097182 | orchestrator |
2026-04-05 00:58:11.097186 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-05 00:58:11.097190 | orchestrator | Sunday 05 April 2026 00:56:48 +0000 (0:00:01.457) 0:10:00.429 **********
2026-04-05 00:58:11.097193 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097197 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097201 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097205 | orchestrator |
2026-04-05 00:58:11.097208 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-05 00:58:11.097212 | orchestrator | Sunday 05 April 2026 00:56:49 +0000 (0:00:01.307) 0:10:01.737 **********
2026-04-05 00:58:11.097216 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097219 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097223 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097227 | orchestrator |
2026-04-05 00:58:11.097230 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-05 00:58:11.097234 | orchestrator | Sunday 05 April 2026 00:56:52 +0000 (0:00:02.391) 0:10:04.129 **********
2026-04-05 00:58:11.097238 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097242 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097245 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097249 | orchestrator |
2026-04-05 00:58:11.097253 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-05 00:58:11.097256 | orchestrator | Sunday 05 April 2026 00:56:54 +0000 (0:00:02.289) 0:10:06.419 **********
2026-04-05 00:58:11.097260 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097264 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097268 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097271 | orchestrator |
2026-04-05 00:58:11.097278 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 00:58:11.097282 | orchestrator | Sunday 05 April 2026 00:56:56 +0000 (0:00:01.630) 0:10:08.049 **********
2026-04-05 00:58:11.097286 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097289 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097293 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097297 | orchestrator |
2026-04-05 00:58:11.097300 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-05 00:58:11.097304 | orchestrator | Sunday 05 April 2026 00:56:56 +0000 (0:00:00.692) 0:10:08.742 **********
2026-04-05 00:58:11.097308 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.097312 | orchestrator |
2026-04-05 00:58:11.097316 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-05 00:58:11.097319 | orchestrator | Sunday 05 April 2026 00:56:57 +0000 (0:00:00.564) 0:10:09.306 **********
2026-04-05 00:58:11.097326 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097330 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097334 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097337 | orchestrator |
2026-04-05 00:58:11.097341 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-05 00:58:11.097345 | orchestrator | Sunday 05 April 2026 00:56:57 +0000 (0:00:00.350) 0:10:09.657 **********
2026-04-05 00:58:11.097349 | orchestrator | changed: [testbed-node-4]
2026-04-05 00:58:11.097352 | orchestrator | changed: [testbed-node-5]
2026-04-05 00:58:11.097356 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097360 | orchestrator |
2026-04-05 00:58:11.097363 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-05 00:58:11.097367 | orchestrator | Sunday 05 April 2026 00:56:59 +0000 (0:00:02.044) 0:10:11.701 **********
2026-04-05 00:58:11.097371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 00:58:11.097375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 00:58:11.097378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 00:58:11.097382 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097386 | orchestrator |
2026-04-05 00:58:11.097389 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-05 00:58:11.097393 | orchestrator | Sunday 05 April 2026 00:57:00 +0000 (0:00:00.807) 0:10:12.509 **********
2026-04-05 00:58:11.097397 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097400 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097404 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097408 | orchestrator |
2026-04-05 00:58:11.097412 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-05 00:58:11.097415 | orchestrator |
2026-04-05 00:58:11.097419 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 00:58:11.097423 | orchestrator | Sunday 05 April 2026 00:57:01 +0000 (0:00:00.851) 0:10:13.360 **********
2026-04-05 00:58:11.097426 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.097430 | orchestrator |
2026-04-05 00:58:11.097434 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 00:58:11.097440 | orchestrator | Sunday 05 April 2026 00:57:02 +0000 (0:00:01.139) 0:10:14.500 **********
2026-04-05 00:58:11.097444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.097448 | orchestrator |
2026-04-05 00:58:11.097452 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 00:58:11.097455 | orchestrator | Sunday 05 April 2026 00:57:03 +0000 (0:00:00.764) 0:10:15.264 **********
2026-04-05 00:58:11.097459 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097463 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097466 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097470 | orchestrator |
2026-04-05 00:58:11.097474 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 00:58:11.097477 | orchestrator | Sunday 05 April 2026 00:57:04 +0000 (0:00:00.783) 0:10:16.048 **********
2026-04-05 00:58:11.097481 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097485 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097489 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097492 | orchestrator |
2026-04-05 00:58:11.097496 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 00:58:11.097500 | orchestrator | Sunday 05 April 2026 00:57:05 +0000 (0:00:00.886) 0:10:16.935 **********
2026-04-05 00:58:11.097503 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097507 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097511 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097515 | orchestrator |
2026-04-05 00:58:11.097518 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 00:58:11.097525 | orchestrator | Sunday 05 April 2026 00:57:06 +0000 (0:00:01.713) 0:10:18.649 **********
2026-04-05 00:58:11.097529 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097533 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097536 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097540 | orchestrator |
2026-04-05 00:58:11.097544 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 00:58:11.097547 | orchestrator | Sunday 05 April 2026 00:57:07 +0000 (0:00:00.615) 0:10:19.264 **********
2026-04-05 00:58:11.097551 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097555 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097559 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097562 | orchestrator |
2026-04-05 00:58:11.097566 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 00:58:11.097570 | orchestrator | Sunday 05 April 2026 00:57:07 +0000 (0:00:00.507) 0:10:19.772 **********
2026-04-05 00:58:11.097573 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097577 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097581 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097585 | orchestrator |
2026-04-05 00:58:11.097588 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 00:58:11.097594 | orchestrator | Sunday 05 April 2026 00:57:08 +0000 (0:00:00.284) 0:10:20.057 **********
2026-04-05 00:58:11.097598 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097602 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097606 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097609 | orchestrator |
2026-04-05 00:58:11.097613 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 00:58:11.097617 | orchestrator | Sunday 05 April 2026 00:57:08 +0000 (0:00:00.311) 0:10:20.368 **********
2026-04-05 00:58:11.097621 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097624 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097628 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097632 | orchestrator |
2026-04-05 00:58:11.097635 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 00:58:11.097639 | orchestrator | Sunday 05 April 2026 00:57:09 +0000 (0:00:00.786) 0:10:21.155 **********
2026-04-05 00:58:11.097643 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097647 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097650 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097654 | orchestrator |
2026-04-05 00:58:11.097658 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 00:58:11.097661 | orchestrator | Sunday 05 April 2026 00:57:10 +0000 (0:00:01.059) 0:10:22.214 **********
2026-04-05 00:58:11.097665 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097669 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097673 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097676 | orchestrator |
2026-04-05 00:58:11.097680 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 00:58:11.097684 | orchestrator | Sunday 05 April 2026 00:57:10 +0000 (0:00:00.324) 0:10:22.539 **********
2026-04-05 00:58:11.097687 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097691 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097695 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097698 | orchestrator |
2026-04-05 00:58:11.097702 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 00:58:11.097706 | orchestrator | Sunday 05 April 2026 00:57:11 +0000 (0:00:00.350) 0:10:22.889 **********
2026-04-05 00:58:11.097710 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097713 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097717 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097721 | orchestrator |
2026-04-05 00:58:11.097725 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 00:58:11.097728 | orchestrator | Sunday 05 April 2026 00:57:11 +0000 (0:00:00.332) 0:10:23.222 **********
2026-04-05 00:58:11.097735 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097739 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097743 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097746 | orchestrator |
2026-04-05 00:58:11.097750 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 00:58:11.097754 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.658) 0:10:23.881 **********
2026-04-05 00:58:11.097757 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097761 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097765 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097768 | orchestrator |
2026-04-05 00:58:11.097772 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 00:58:11.097778 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.368) 0:10:24.249 **********
2026-04-05 00:58:11.097782 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097786 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097790 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097793 | orchestrator |
2026-04-05 00:58:11.097797 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 00:58:11.097801 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.367) 0:10:24.617 **********
2026-04-05 00:58:11.097805 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097808 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097812 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097816 | orchestrator |
2026-04-05 00:58:11.097819 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 00:58:11.097823 | orchestrator | Sunday 05 April 2026 00:57:13 +0000 (0:00:00.338) 0:10:24.955 **********
2026-04-05 00:58:11.097827 | orchestrator | skipping: [testbed-node-3]
2026-04-05 00:58:11.097831 | orchestrator | skipping: [testbed-node-4]
2026-04-05 00:58:11.097834 | orchestrator | skipping: [testbed-node-5]
2026-04-05 00:58:11.097838 | orchestrator |
2026-04-05 00:58:11.097842 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 00:58:11.097845 | orchestrator | Sunday 05 April 2026 00:57:13 +0000 (0:00:00.573) 0:10:25.528 **********
2026-04-05 00:58:11.097849 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097853 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097857 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097860 | orchestrator |
2026-04-05 00:58:11.097864 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 00:58:11.097868 | orchestrator | Sunday 05 April 2026 00:57:14 +0000 (0:00:00.359) 0:10:25.887 **********
2026-04-05 00:58:11.097871 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.097875 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.097879 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.097882 | orchestrator |
2026-04-05 00:58:11.097886 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-05 00:58:11.097890 | orchestrator | Sunday 05 April 2026 00:57:14 +0000 (0:00:00.558) 0:10:26.446 **********
2026-04-05 00:58:11.097894 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 00:58:11.097897 | orchestrator |
2026-04-05 00:58:11.097901 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-05 00:58:11.097905 | orchestrator | Sunday 05 April 2026 00:57:15 +0000 (0:00:00.844) 0:10:27.290 **********
2026-04-05 00:58:11.097909 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 00:58:11.097912 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.097916 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-05 00:58:11.097920 | orchestrator |
2026-04-05 00:58:11.097926 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-05 00:58:11.097930 | orchestrator | Sunday 05 April 2026 00:57:17 +0000 (0:00:01.907) 0:10:29.198 **********
2026-04-05 00:58:11.097934 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.097940 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 00:58:11.097944 | orchestrator | changed: [testbed-node-3]
2026-04-05 00:58:11.097948 | orchestrator
| changed: [testbed-node-4] => (item=None) 2026-04-05 00:58:11.097951 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 00:58:11.097955 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:58:11.097959 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 00:58:11.097962 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 00:58:11.097966 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.097970 | orchestrator | 2026-04-05 00:58:11.097973 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-05 00:58:11.097977 | orchestrator | Sunday 05 April 2026 00:57:18 +0000 (0:00:01.257) 0:10:30.455 ********** 2026-04-05 00:58:11.097981 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.097984 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.097988 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.097992 | orchestrator | 2026-04-05 00:58:11.097995 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-05 00:58:11.097999 | orchestrator | Sunday 05 April 2026 00:57:18 +0000 (0:00:00.327) 0:10:30.782 ********** 2026-04-05 00:58:11.098003 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:58:11.098007 | orchestrator | 2026-04-05 00:58:11.098010 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-05 00:58:11.098033 | orchestrator | Sunday 05 April 2026 00:57:19 +0000 (0:00:00.841) 0:10:31.624 ********** 2026-04-05 00:58:11.098037 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.098041 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.098045 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.098049 | orchestrator | 2026-04-05 00:58:11.098053 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-05 00:58:11.098057 | orchestrator | Sunday 05 April 2026 00:57:20 +0000 (0:00:00.901) 0:10:32.525 ********** 2026-04-05 00:58:11.098061 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 00:58:11.098064 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 00:58:11.098071 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 00:58:11.098075 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 00:58:11.098079 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 00:58:11.098083 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 00:58:11.098086 | orchestrator | 2026-04-05 00:58:11.098090 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 00:58:11.098094 | orchestrator | Sunday 05 April 2026 00:57:24 +0000 (0:00:03.832) 0:10:36.358 ********** 2026-04-05 00:58:11.098098 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 00:58:11.098112 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 00:58:11.098116 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 00:58:11.098120 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 00:58:11.098127 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 00:58:11.098131 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 00:58:11.098135 | orchestrator | 2026-04-05 00:58:11.098139 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 00:58:11.098142 | orchestrator | Sunday 05 April 2026 00:57:26 +0000 (0:00:02.376) 0:10:38.735 ********** 2026-04-05 00:58:11.098146 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 00:58:11.098150 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:58:11.098153 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 00:58:11.098157 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:58:11.098161 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 00:58:11.098165 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.098168 | orchestrator | 2026-04-05 00:58:11.098172 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-05 00:58:11.098176 | orchestrator | Sunday 05 April 2026 00:57:28 +0000 (0:00:01.314) 0:10:40.050 ********** 2026-04-05 00:58:11.098180 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-05 00:58:11.098183 | orchestrator | 2026-04-05 00:58:11.098187 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-05 00:58:11.098191 | orchestrator | Sunday 05 April 2026 00:57:28 +0000 (0:00:00.261) 0:10:40.312 ********** 2026-04-05 00:58:11.098197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-05 00:58:11.098202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098217 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.098221 | orchestrator | 2026-04-05 00:58:11.098224 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-05 00:58:11.098228 | orchestrator | Sunday 05 April 2026 00:57:29 +0000 (0:00:00.605) 0:10:40.917 ********** 2026-04-05 00:58:11.098232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 00:58:11.098251 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
00:58:11.098254 | orchestrator | 2026-04-05 00:58:11.098258 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-05 00:58:11.098262 | orchestrator | Sunday 05 April 2026 00:57:30 +0000 (0:00:01.024) 0:10:41.941 ********** 2026-04-05 00:58:11.098266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 00:58:11.098269 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 00:58:11.098278 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 00:58:11.098282 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 00:58:11.098286 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 00:58:11.098290 | orchestrator | 2026-04-05 00:58:11.098293 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-05 00:58:11.098297 | orchestrator | Sunday 05 April 2026 00:57:55 +0000 (0:00:25.568) 0:11:07.509 ********** 2026-04-05 00:58:11.098301 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.098305 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.098308 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.098312 | orchestrator | 2026-04-05 00:58:11.098316 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-05 00:58:11.098320 | orchestrator | 
Sunday 05 April 2026 00:57:56 +0000 (0:00:00.612) 0:11:08.121 ********** 2026-04-05 00:58:11.098323 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.098327 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.098331 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.098334 | orchestrator | 2026-04-05 00:58:11.098338 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-05 00:58:11.098342 | orchestrator | Sunday 05 April 2026 00:57:56 +0000 (0:00:00.334) 0:11:08.456 ********** 2026-04-05 00:58:11.098346 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:58:11.098349 | orchestrator | 2026-04-05 00:58:11.098353 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-05 00:58:11.098357 | orchestrator | Sunday 05 April 2026 00:57:57 +0000 (0:00:00.581) 0:11:09.037 ********** 2026-04-05 00:58:11.098361 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:58:11.098364 | orchestrator | 2026-04-05 00:58:11.098368 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-05 00:58:11.098372 | orchestrator | Sunday 05 April 2026 00:57:58 +0000 (0:00:00.838) 0:11:09.876 ********** 2026-04-05 00:58:11.098375 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:58:11.098379 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:58:11.098383 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.098386 | orchestrator | 2026-04-05 00:58:11.098390 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-05 00:58:11.098396 | orchestrator | Sunday 05 April 2026 00:57:59 +0000 (0:00:01.288) 0:11:11.164 ********** 2026-04-05 00:58:11.098400 | orchestrator | changed: 
[testbed-node-3] 2026-04-05 00:58:11.098404 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:58:11.098407 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.098411 | orchestrator | 2026-04-05 00:58:11.098415 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-05 00:58:11.098419 | orchestrator | Sunday 05 April 2026 00:58:00 +0000 (0:00:01.259) 0:11:12.424 ********** 2026-04-05 00:58:11.098422 | orchestrator | changed: [testbed-node-3] 2026-04-05 00:58:11.098426 | orchestrator | changed: [testbed-node-5] 2026-04-05 00:58:11.098430 | orchestrator | changed: [testbed-node-4] 2026-04-05 00:58:11.098433 | orchestrator | 2026-04-05 00:58:11.098437 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-05 00:58:11.098441 | orchestrator | Sunday 05 April 2026 00:58:02 +0000 (0:00:01.934) 0:11:14.358 ********** 2026-04-05 00:58:11.098445 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.098452 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.098456 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 00:58:11.098460 | orchestrator | 2026-04-05 00:58:11.098464 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 00:58:11.098468 | orchestrator | Sunday 05 April 2026 00:58:05 +0000 (0:00:02.878) 0:11:17.236 ********** 2026-04-05 00:58:11.098471 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.098475 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.098479 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.098482 | orchestrator 
| 2026-04-05 00:58:11.098486 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-05 00:58:11.098490 | orchestrator | Sunday 05 April 2026 00:58:05 +0000 (0:00:00.354) 0:11:17.591 ********** 2026-04-05 00:58:11.098494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 00:58:11.098497 | orchestrator | 2026-04-05 00:58:11.098501 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-05 00:58:11.098505 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:00:00.874) 0:11:18.465 ********** 2026-04-05 00:58:11.098509 | orchestrator | ok: [testbed-node-3] 2026-04-05 00:58:11.098512 | orchestrator | ok: [testbed-node-4] 2026-04-05 00:58:11.098516 | orchestrator | ok: [testbed-node-5] 2026-04-05 00:58:11.098520 | orchestrator | 2026-04-05 00:58:11.098524 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-05 00:58:11.098527 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.327) 0:11:18.793 ********** 2026-04-05 00:58:11.098531 | orchestrator | skipping: [testbed-node-3] 2026-04-05 00:58:11.098535 | orchestrator | skipping: [testbed-node-4] 2026-04-05 00:58:11.098538 | orchestrator | skipping: [testbed-node-5] 2026-04-05 00:58:11.098542 | orchestrator | 2026-04-05 00:58:11.098546 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-05 00:58:11.098552 | orchestrator | Sunday 05 April 2026 00:58:07 +0000 (0:00:00.397) 0:11:19.190 ********** 2026-04-05 00:58:11.098556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 00:58:11.098560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 00:58:11.098563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 00:58:11.098567 | orchestrator 
| skipping: [testbed-node-3]
2026-04-05 00:58:11.098571 | orchestrator |
2026-04-05 00:58:11.098575 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-05 00:58:11.098578 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:01.264) 0:11:20.455 **********
2026-04-05 00:58:11.098582 | orchestrator | ok: [testbed-node-3]
2026-04-05 00:58:11.098586 | orchestrator | ok: [testbed-node-4]
2026-04-05 00:58:11.098589 | orchestrator | ok: [testbed-node-5]
2026-04-05 00:58:11.098593 | orchestrator |
2026-04-05 00:58:11.098597 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:58:11.098601 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-05 00:58:11.098605 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-05 00:58:11.098608 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-05 00:58:11.098612 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-05 00:58:11.098619 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-05 00:58:11.098623 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-05 00:58:11.098626 | orchestrator |
2026-04-05 00:58:11.098630 | orchestrator |
2026-04-05 00:58:11.098634 | orchestrator |
2026-04-05 00:58:11.098638 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:58:11.098642 | orchestrator | Sunday 05 April 2026 00:58:08 +0000 (0:00:00.252) 0:11:20.708 **********
2026-04-05 00:58:11.098645 | orchestrator | ===============================================================================
2026-04-05 00:58:11.098652 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 70.61s
2026-04-05 00:58:11.098656 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 35.75s
2026-04-05 00:58:11.098659 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 25.57s
2026-04-05 00:58:11.098663 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.60s
2026-04-05 00:58:11.098667 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.60s
2026-04-05 00:58:11.098671 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.23s
2026-04-05 00:58:11.098674 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 11.65s
2026-04-05 00:58:11.098678 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.81s
2026-04-05 00:58:11.098682 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.99s
2026-04-05 00:58:11.098686 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.80s
2026-04-05 00:58:11.098689 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.66s
2026-04-05 00:58:11.098693 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.10s
2026-04-05 00:58:11.098697 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.19s
2026-04-05 00:58:11.098701 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.74s
2026-04-05 00:58:11.098704 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.31s
2026-04-05 00:58:11.098708 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.83s
2026-04-05 00:58:11.098712 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.65s
2026-04-05 00:58:11.098716 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.39s
2026-04-05 00:58:11.098719 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.30s
2026-04-05 00:58:11.098723 | orchestrator | ceph-facts : Set_fact _container_exec_cmd ------------------------------- 3.13s
2026-04-05 00:58:11.098727 | orchestrator | 2026-04-05 00:58:11 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:58:11.098731 | orchestrator | 2026-04-05 00:58:11 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:58:11.098735 | orchestrator | 2026-04-05 00:58:11 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED
2026-04-05 00:58:11.098738 | orchestrator | 2026-04-05 00:58:11 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:14.125035 | orchestrator | 2026-04-05 00:58:14 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:58:14.125847 | orchestrator | 2026-04-05 00:58:14 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:58:14.127151 | orchestrator | 2026-04-05 00:58:14 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED
2026-04-05 00:58:14.127198 | orchestrator | 2026-04-05 00:58:14 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:58:17.179631 | orchestrator | 2026-04-05 00:58:17 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED
2026-04-05 00:58:17.183438 | orchestrator | 2026-04-05 00:58:17 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED
2026-04-05 00:58:17.185001 | orchestrator | 2026-04-05 00:58:17 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED
2026-04-05 00:58:17.185229 | orchestrator | 2026-04-05 00:58:17 |
INFO  | Wait 1 second(s) until the next check
Wait 1 second(s) until the next check 2026-04-05 00:59:30.500343 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:59:30.501853 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:59:30.504533 | orchestrator | 2026-04-05 00:59:30 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:30.504611 | orchestrator | 2026-04-05 00:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:33.553171 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:59:33.558861 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:59:33.563878 | orchestrator | 2026-04-05 00:59:33 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:33.563935 | orchestrator | 2026-04-05 00:59:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:36.616220 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:59:36.617608 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:59:36.619498 | orchestrator | 2026-04-05 00:59:36 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:36.619553 | orchestrator | 2026-04-05 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:39.665284 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:59:39.667516 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:59:39.668973 | orchestrator | 2026-04-05 00:59:39 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state 
STARTED 2026-04-05 00:59:39.669009 | orchestrator | 2026-04-05 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:42.721224 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:59:42.724970 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state STARTED 2026-04-05 00:59:42.728111 | orchestrator | 2026-04-05 00:59:42 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:42.728905 | orchestrator | 2026-04-05 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:45.779673 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 00:59:45.781319 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state STARTED 2026-04-05 00:59:45.784872 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task 9daa912a-da73-4379-86f2-1d23972e1467 is in state SUCCESS 2026-04-05 00:59:45.786751 | orchestrator | 2026-04-05 00:59:45.786827 | orchestrator | 2026-04-05 00:59:45.786843 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-05 00:59:45.786856 | orchestrator | 2026-04-05 00:59:45.787238 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-05 00:59:45.787252 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:00.128) 0:00:00.128 ********** 2026-04-05 00:59:45.787264 | orchestrator | ok: [localhost] => { 2026-04-05 00:59:45.787287 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-04-05 00:59:45.787307 | orchestrator | } 2026-04-05 00:59:45.787328 | orchestrator | 2026-04-05 00:59:45.787348 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-05 00:59:45.787368 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:00.048) 0:00:00.177 ********** 2026-04-05 00:59:45.787389 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-05 00:59:45.787411 | orchestrator | ...ignoring 2026-04-05 00:59:45.787429 | orchestrator | 2026-04-05 00:59:45.787449 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-05 00:59:45.787470 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:02.862) 0:00:03.039 ********** 2026-04-05 00:59:45.787491 | orchestrator | skipping: [localhost] 2026-04-05 00:59:45.787511 | orchestrator | 2026-04-05 00:59:45.787532 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-05 00:59:45.787552 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:00.044) 0:00:03.083 ********** 2026-04-05 00:59:45.787569 | orchestrator | ok: [localhost] 2026-04-05 00:59:45.787722 | orchestrator | 2026-04-05 00:59:45.787745 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 00:59:45.787763 | orchestrator | 2026-04-05 00:59:45.787780 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 00:59:45.787796 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:00.205) 0:00:03.288 ********** 2026-04-05 00:59:45.787814 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.787830 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:45.787848 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:45.787863 | orchestrator | 2026-04-05 00:59:45.787880 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 00:59:45.787898 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:00.311) 0:00:03.600 ********** 2026-04-05 00:59:45.787916 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-05 00:59:45.787936 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-05 00:59:45.787956 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-05 00:59:45.787974 | orchestrator | 2026-04-05 00:59:45.787993 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-05 00:59:45.788007 | orchestrator | 2026-04-05 00:59:45.788048 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-05 00:59:45.788060 | orchestrator | Sunday 05 April 2026 00:56:38 +0000 (0:00:00.395) 0:00:03.995 ********** 2026-04-05 00:59:45.788071 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 00:59:45.788082 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 00:59:45.788122 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 00:59:45.788133 | orchestrator | 2026-04-05 00:59:45.788144 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:59:45.788155 | orchestrator | Sunday 05 April 2026 00:56:38 +0000 (0:00:00.386) 0:00:04.382 ********** 2026-04-05 00:59:45.788181 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:45.788193 | orchestrator | 2026-04-05 00:59:45.788204 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-05 00:59:45.788215 | orchestrator | Sunday 05 April 2026 00:56:39 +0000 (0:00:00.656) 0:00:05.039 ********** 2026-04-05 00:59:45.788253 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:59:45.788270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', [same service definition as for testbed-node-0 above, with MYSQL_HOST '192.168.16.12']}}) 2026-04-05 00:59:45.788308 | orchestrator | changed: [testbed-node-1] => (item=[same service definition, with MYSQL_HOST '192.168.16.11']) 2026-04-05 00:59:45.788321 | orchestrator | 2026-04-05 00:59:45.788344 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-05 00:59:45.788359 | orchestrator | Sunday 05 April 2026 00:56:42 +0000 (0:00:03.292) 0:00:08.332 ********** 2026-04-05 00:59:45.788372 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.788385 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.788398 | 
orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.788410 | orchestrator | 2026-04-05 00:59:45.788423 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-05 00:59:45.788437 | orchestrator | Sunday 05 April 2026 00:56:43 +0000 (0:00:00.655) 0:00:08.988 ********** 2026-04-05 00:59:45.788578 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.788592 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.788602 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.788613 | orchestrator | 2026-04-05 00:59:45.788624 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-05 00:59:45.788634 | orchestrator | Sunday 05 April 2026 00:56:44 +0000 (0:00:01.666) 0:00:10.654 ********** 2026-04-05 00:59:45.788647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:59:45.788685 | orchestrator | changed: [testbed-node-0] => (item=[same service definition as above, with MYSQL_HOST '192.168.16.10']) 2026-04-05 00:59:45.788700 | orchestrator | changed: [testbed-node-2] => (item=[same service definition, with MYSQL_HOST '192.168.16.12']) 2026-04-05 00:59:45.788719 | orchestrator | 2026-04-05 00:59:45.788730 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-05 00:59:45.788741 | orchestrator | Sunday 05 April 2026 00:56:49 +0000 (0:00:05.149) 0:00:15.804 ********** 2026-04-05 00:59:45.788752 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.788763 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.788773 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.788784 | orchestrator | 2026-04-05 00:59:45.788795 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-05 00:59:45.788805 | orchestrator | Sunday 05 April 2026 00:56:51 +0000 (0:00:01.324) 0:00:17.128 ********** 2026-04-05 00:59:45.788816 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.788827 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:45.788842 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:45.788853 | orchestrator | 2026-04-05 00:59:45.788864 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:59:45.788875 | orchestrator | Sunday 05 April 2026 00:56:55 +0000 (0:00:04.004) 0:00:21.133 ********** 2026-04-05 00:59:45.788886 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:45.788897 | orchestrator | 2026-04-05 00:59:45.788907 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-05 
00:59:45.788918 | orchestrator | Sunday 05 April 2026 00:56:56 +0000 (0:00:00.764) 0:00:21.898 ********** 2026-04-05 00:59:45.788939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:45.788952 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 00:59:45.788976 | orchestrator | skipping: [testbed-node-1] => (item=[same mariadb service definition as above, with MYSQL_HOST '192.168.16.11'])  2026-04-05 00:59:45.788989 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.789008 | orchestrator | skipping: [testbed-node-2] => (item=[same service definition, with MYSQL_HOST '192.168.16.12'])  2026-04-05 00:59:45.789049 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.789060 | orchestrator | 2026-04-05 00:59:45.789071 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-04-05 00:59:45.789082 | orchestrator | Sunday 05 April 2026 00:56:59 +0000 (0:00:03.472) 0:00:25.371 ********** 2026-04-05 00:59:45.789101 | orchestrator | skipping: [testbed-node-2] => (item=[same mariadb service definition as above, with MYSQL_HOST '192.168.16.12'])  2026-04-05 00:59:45.789113 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.789131 | orchestrator | skipping: [testbed-node-1] => (item=[same service definition, with MYSQL_HOST '192.168.16.11'])  2026-04-05 00:59:45.789143 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.789235 | orchestrator | skipping: [testbed-node-0] => (item=[same service definition, with MYSQL_HOST '192.168.16.10'])  2026-04-05 00:59:45.789266 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.789279 | orchestrator | 2026-04-05 
00:59:45.789291 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 00:59:45.789305 | orchestrator | Sunday 05 April 2026 00:57:03 +0000 (0:00:03.789) 0:00:29.160 ********** 2026-04-05 00:59:45.789323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:45.789347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:45.789367 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 00:59:45.789378 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.789395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 00:59:45.789407 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 00:59:45.789418 | orchestrator | 2026-04-05 00:59:45.789429 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-05 00:59:45.789552 | orchestrator | Sunday 05 April 2026 00:57:07 +0000 (0:00:04.040) 0:00:33.200 ********** 2026-04-05 00:59:45.789575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:59:45.789607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-04-05 00:59:45.789629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 00:59:45.789649 | orchestrator | 2026-04-05 00:59:45.789661 | orchestrator | TASK [mariadb : Create MariaDB volume] 
***************************************** 2026-04-05 00:59:45.789672 | orchestrator | Sunday 05 April 2026 00:57:11 +0000 (0:00:03.889) 0:00:37.089 ********** 2026-04-05 00:59:45.789683 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.789694 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:45.789704 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:45.789715 | orchestrator | 2026-04-05 00:59:45.789726 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-05 00:59:45.789737 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.842) 0:00:37.932 ********** 2026-04-05 00:59:45.789747 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.789759 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:45.789770 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:45.789780 | orchestrator | 2026-04-05 00:59:45.789791 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-05 00:59:45.789802 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.405) 0:00:38.337 ********** 2026-04-05 00:59:45.789813 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.789824 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:45.789834 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:45.789845 | orchestrator | 2026-04-05 00:59:45.789856 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-05 00:59:45.789867 | orchestrator | Sunday 05 April 2026 00:57:12 +0000 (0:00:00.438) 0:00:38.776 ********** 2026-04-05 00:59:45.789879 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-05 00:59:45.789890 | orchestrator | ...ignoring 2026-04-05 00:59:45.789901 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-05 00:59:45.789912 | orchestrator | ...ignoring 2026-04-05 00:59:45.789928 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-05 00:59:45.789939 | orchestrator | ...ignoring 2026-04-05 00:59:45.789950 | orchestrator | 2026-04-05 00:59:45.789961 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-05 00:59:45.789972 | orchestrator | Sunday 05 April 2026 00:57:24 +0000 (0:00:11.166) 0:00:49.943 ********** 2026-04-05 00:59:45.789983 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.789994 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:45.790004 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:45.790169 | orchestrator | 2026-04-05 00:59:45.790186 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-05 00:59:45.790200 | orchestrator | Sunday 05 April 2026 00:57:24 +0000 (0:00:00.482) 0:00:50.426 ********** 2026-04-05 00:59:45.790225 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.790239 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.790251 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.790264 | orchestrator | 2026-04-05 00:59:45.790276 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-05 00:59:45.790289 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:00.539) 0:00:50.965 ********** 2026-04-05 00:59:45.790302 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.790314 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.790326 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.790337 | orchestrator | 2026-04-05 00:59:45.790348 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-05 00:59:45.790359 | orchestrator | Sunday 05 April 2026 00:57:25 +0000 (0:00:00.510) 0:00:51.475 ********** 2026-04-05 00:59:45.790371 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.790382 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.790392 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.790401 | orchestrator | 2026-04-05 00:59:45.790411 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-05 00:59:45.790421 | orchestrator | Sunday 05 April 2026 00:57:26 +0000 (0:00:00.737) 0:00:52.213 ********** 2026-04-05 00:59:45.790430 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.790440 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:45.790450 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:45.790460 | orchestrator | 2026-04-05 00:59:45.790469 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-05 00:59:45.790480 | orchestrator | Sunday 05 April 2026 00:57:26 +0000 (0:00:00.424) 0:00:52.637 ********** 2026-04-05 00:59:45.790497 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.790507 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.790516 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.790526 | orchestrator | 2026-04-05 00:59:45.790536 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:59:45.790546 | orchestrator | Sunday 05 April 2026 00:57:27 +0000 (0:00:00.532) 0:00:53.171 ********** 2026-04-05 00:59:45.790556 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.790565 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.790575 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-05 00:59:45.790584 | orchestrator | 2026-04-05 
00:59:45.790594 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-05 00:59:45.790606 | orchestrator | Sunday 05 April 2026 00:57:27 +0000 (0:00:00.482) 0:00:53.653 ********** 2026-04-05 00:59:45.790622 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.790640 | orchestrator | 2026-04-05 00:59:45.790664 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-05 00:59:45.790686 | orchestrator | Sunday 05 April 2026 00:57:38 +0000 (0:00:10.847) 0:01:04.501 ********** 2026-04-05 00:59:45.790703 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.790720 | orchestrator | 2026-04-05 00:59:45.790737 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 00:59:45.790755 | orchestrator | Sunday 05 April 2026 00:57:38 +0000 (0:00:00.295) 0:01:04.796 ********** 2026-04-05 00:59:45.790769 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.790784 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.790798 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.790813 | orchestrator | 2026-04-05 00:59:45.790830 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-05 00:59:45.790848 | orchestrator | Sunday 05 April 2026 00:57:39 +0000 (0:00:00.876) 0:01:05.673 ********** 2026-04-05 00:59:45.790865 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.790882 | orchestrator | 2026-04-05 00:59:45.790900 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-05 00:59:45.790918 | orchestrator | Sunday 05 April 2026 00:57:48 +0000 (0:00:08.259) 0:01:13.932 ********** 2026-04-05 00:59:45.790948 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.790966 | orchestrator | 2026-04-05 00:59:45.790983 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-04-05 00:59:45.790999 | orchestrator | Sunday 05 April 2026 00:57:50 +0000 (0:00:02.551) 0:01:16.484 ********** 2026-04-05 00:59:45.791009 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.791057 | orchestrator | 2026-04-05 00:59:45.791070 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-05 00:59:45.791080 | orchestrator | Sunday 05 April 2026 00:57:53 +0000 (0:00:02.809) 0:01:19.293 ********** 2026-04-05 00:59:45.791090 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.791099 | orchestrator | 2026-04-05 00:59:45.791109 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-05 00:59:45.791119 | orchestrator | Sunday 05 April 2026 00:57:53 +0000 (0:00:00.125) 0:01:19.419 ********** 2026-04-05 00:59:45.791128 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.791138 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:45.791147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:45.791157 | orchestrator | 2026-04-05 00:59:45.791167 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-05 00:59:45.791176 | orchestrator | Sunday 05 April 2026 00:57:53 +0000 (0:00:00.313) 0:01:19.733 ********** 2026-04-05 00:59:45.791186 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:45.791195 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:45.791204 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:45.791223 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-05 00:59:45.791233 | orchestrator | 2026-04-05 00:59:45.791243 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-05 00:59:45.791253 | orchestrator | skipping: no hosts matched 2026-04-05 00:59:45.791262 | orchestrator | 2026-04-05 00:59:45.791272 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-05 00:59:45.791281 | orchestrator | 2026-04-05 00:59:45.791291 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-05 00:59:45.791301 | orchestrator | Sunday 05 April 2026 00:57:54 +0000 (0:00:00.345) 0:01:20.078 ********** 2026-04-05 00:59:45.791310 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:45.791320 | orchestrator | 2026-04-05 00:59:45.791330 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-05 00:59:45.791339 | orchestrator | Sunday 05 April 2026 00:58:11 +0000 (0:00:16.820) 0:01:36.899 ********** 2026-04-05 00:59:45.791349 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:45.791358 | orchestrator | 2026-04-05 00:59:45.791368 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-05 00:59:45.791378 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:16.669) 0:01:53.568 ********** 2026-04-05 00:59:45.791387 | orchestrator | ok: [testbed-node-1] 2026-04-05 00:59:45.791421 | orchestrator | 2026-04-05 00:59:45.791432 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-05 00:59:45.791441 | orchestrator | 2026-04-05 00:59:45.791451 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-05 00:59:45.791461 | orchestrator | Sunday 05 April 2026 00:58:30 +0000 (0:00:02.513) 0:01:56.081 ********** 2026-04-05 00:59:45.791470 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:45.791480 | orchestrator | 2026-04-05 00:59:45.791490 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-05 00:59:45.791499 | orchestrator | Sunday 05 April 2026 00:58:49 +0000 (0:00:18.873) 0:02:14.955 ********** 2026-04-05 00:59:45.791509 | 
orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:45.791519 | orchestrator | 2026-04-05 00:59:45.791529 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-05 00:59:45.791538 | orchestrator | Sunday 05 April 2026 00:59:05 +0000 (0:00:16.918) 0:02:31.873 ********** 2026-04-05 00:59:45.791548 | orchestrator | ok: [testbed-node-2] 2026-04-05 00:59:45.791565 | orchestrator | 2026-04-05 00:59:45.791576 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-05 00:59:45.791585 | orchestrator | 2026-04-05 00:59:45.791606 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-05 00:59:45.791617 | orchestrator | Sunday 05 April 2026 00:59:08 +0000 (0:00:02.411) 0:02:34.284 ********** 2026-04-05 00:59:45.791626 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:45.791636 | orchestrator | 2026-04-05 00:59:45.791646 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-05 00:59:45.791656 | orchestrator | Sunday 05 April 2026 00:59:25 +0000 (0:00:17.347) 0:02:51.632 ********** 2026-04-05 00:59:45.791665 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.791675 | orchestrator | 2026-04-05 00:59:45.791685 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-05 00:59:45.791695 | orchestrator | Sunday 05 April 2026 00:59:26 +0000 (0:00:00.556) 0:02:52.189 ********** 2026-04-05 00:59:45.791705 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:45.791714 | orchestrator | 2026-04-05 00:59:45.791725 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-05 00:59:45.791734 | orchestrator | 2026-04-05 00:59:45.791744 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-05 00:59:45.791753 | orchestrator | 
Sunday 05 April 2026 00:59:29 +0000 (0:00:02.734) 0:02:54.923 **********
2026-04-05 00:59:45.791763 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:45.791773 | orchestrator |
2026-04-05 00:59:45.791782 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-05 00:59:45.791792 | orchestrator | Sunday 05 April 2026 00:59:29 +0000 (0:00:00.698) 0:02:55.622 **********
2026-04-05 00:59:45.791802 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:45.791813 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:45.791822 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:45.791832 | orchestrator |
2026-04-05 00:59:45.791842 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-05 00:59:45.791852 | orchestrator | Sunday 05 April 2026 00:59:32 +0000 (0:00:02.640) 0:02:58.263 **********
2026-04-05 00:59:45.791862 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:45.791871 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:45.791882 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:45.791891 | orchestrator |
2026-04-05 00:59:45.791901 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-05 00:59:45.791911 | orchestrator | Sunday 05 April 2026 00:59:34 +0000 (0:00:02.465) 0:03:00.728 **********
2026-04-05 00:59:45.791920 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:45.791930 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:45.791940 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:45.791950 | orchestrator |
2026-04-05 00:59:45.791960 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-05 00:59:45.791969 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:02.528) 0:03:03.257 **********
2026-04-05 00:59:45.791979 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:45.791988 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:45.791997 | orchestrator | changed: [testbed-node-0]
2026-04-05 00:59:45.792007 | orchestrator |
2026-04-05 00:59:45.792041 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-05 00:59:45.792052 | orchestrator | Sunday 05 April 2026 00:59:40 +0000 (0:00:02.734) 0:03:05.991 **********
2026-04-05 00:59:45.792061 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:59:45.792071 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:59:45.792081 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:59:45.792091 | orchestrator |
2026-04-05 00:59:45.792100 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-05 00:59:45.792111 | orchestrator | Sunday 05 April 2026 00:59:42 +0000 (0:00:02.790) 0:03:08.782 **********
2026-04-05 00:59:45.792128 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:45.792137 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:45.792156 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:45.792166 | orchestrator |
2026-04-05 00:59:45.792175 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 00:59:45.792185 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-05 00:59:45.792196 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-04-05 00:59:45.792207 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-05 00:59:45.792217 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-05 00:59:45.792227 | orchestrator |
2026-04-05 00:59:45.792238 | orchestrator |
2026-04-05 00:59:45.792248 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 00:59:45.792257 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 (0:00:00.234) 0:03:09.016 **********
2026-04-05 00:59:45.792267 | orchestrator | ===============================================================================
2026-04-05 00:59:45.792276 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.69s
2026-04-05 00:59:45.792286 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.59s
2026-04-05 00:59:45.792296 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.35s
2026-04-05 00:59:45.792306 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.17s
2026-04-05 00:59:45.792315 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.85s
2026-04-05 00:59:45.792326 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.26s
2026-04-05 00:59:45.792343 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.15s
2026-04-05 00:59:45.792353 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.92s
2026-04-05 00:59:45.792363 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.04s
2026-04-05 00:59:45.792373 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.00s
2026-04-05 00:59:45.792383 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.89s
2026-04-05 00:59:45.792393 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.79s
2026-04-05 00:59:45.792402 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.47s
2026-04-05 00:59:45.792412 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.29s
2026-04-05 00:59:45.792422 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s
2026-04-05 00:59:45.792431 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.81s
2026-04-05 00:59:45.792441 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.79s
2026-04-05 00:59:45.792451 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.73s
2026-04-05 00:59:45.792460 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.73s
2026-04-05 00:59:45.792470 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.64s
2026-04-05 00:59:45.792480 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED
2026-04-05 00:59:45.792490 | orchestrator | 2026-04-05 00:59:45 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED
2026-04-05 00:59:45.792500 | orchestrator | 2026-04-05 00:59:45 | INFO  | Wait 1 second(s) until the next check
2026-04-05 00:59:48.834633 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 00:59:48.837292 | orchestrator |
2026-04-05 00:59:48.837338 | orchestrator |
2026-04-05 00:59:48.837345 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 00:59:48.837350 | orchestrator |
2026-04-05 00:59:48.837355 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 00:59:48.837359 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:00.295) 0:00:00.295 **********
2026-04-05 00:59:48.837363 | orchestrator | ok: [testbed-node-0]
2026-04-05 00:59:48.837368 | orchestrator | ok: [testbed-node-1]
2026-04-05 00:59:48.837372 | orchestrator | ok: [testbed-node-2]
2026-04-05 00:59:48.837376 | orchestrator |
2026-04-05 00:59:48.837380 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 00:59:48.837384 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:00.300) 0:00:00.595 **********
2026-04-05 00:59:48.837388 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-05 00:59:48.837393 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-05 00:59:48.837397 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-05 00:59:48.837401 | orchestrator |
2026-04-05 00:59:48.837404 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-05 00:59:48.837408 | orchestrator |
2026-04-05 00:59:48.837412 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-05 00:59:48.837415 | orchestrator | Sunday 05 April 2026 00:56:34 +0000 (0:00:00.285) 0:00:00.881 **********
2026-04-05 00:59:48.837431 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:48.837435 | orchestrator |
2026-04-05 00:59:48.837439 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-05 00:59:48.837443 | orchestrator | Sunday 05 April 2026 00:56:35 +0000 (0:00:00.509) 0:00:01.391 **********
2026-04-05 00:59:48.837447 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 00:59:48.837451 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 00:59:48.837454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 00:59:48.837458 | orchestrator |
2026-04-05 00:59:48.837462 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-05 00:59:48.837466 | orchestrator | Sunday 05 April 2026 00:56:36 +0000 (0:00:01.017) 0:00:02.408 **********
2026-04-05 00:59:48.837471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837557 | orchestrator |
2026-04-05 00:59:48.837564 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-05 00:59:48.837569 | orchestrator | Sunday 05 April 2026 00:56:37 +0000 (0:00:01.145) 0:00:03.554 **********
2026-04-05 00:59:48.837572 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 00:59:48.837576 | orchestrator |
2026-04-05 00:59:48.837580 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-05 00:59:48.837584 | orchestrator | Sunday 05 April 2026 00:56:38 +0000 (0:00:00.638) 0:00:04.192 **********
2026-04-05 00:59:48.837592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837754 | orchestrator |
2026-04-05 00:59:48.837759 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-05 00:59:48.837763 | orchestrator | Sunday 05 April 2026 00:56:40 +0000 (0:00:02.618) 0:00:06.810 **********
2026-04-05 00:59:48.837768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837781 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:48.837785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837801 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:48.837805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837817 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:48.837820 | orchestrator |
2026-04-05 00:59:48.837825 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-04-05 00:59:48.837828 | orchestrator | Sunday 05 April 2026 00:56:41 +0000 (0:00:01.119) 0:00:07.930 **********
2026-04-05 00:59:48.837832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837844 | orchestrator | skipping: [testbed-node-0]
2026-04-05 00:59:48.837851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837862 | orchestrator | skipping: [testbed-node-1]
2026-04-05 00:59:48.837866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837883 | orchestrator | skipping: [testbed-node-2]
2026-04-05 00:59:48.837889 | orchestrator |
2026-04-05 00:59:48.837894 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-05 00:59:48.837900 | orchestrator | Sunday 05 April 2026 00:56:42 +0000 (0:00:01.024) 0:00:08.954 **********
2026-04-05 00:59:48.837910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 00:59:48.837938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 00:59:48.837966 | orchestrator |
2026-04-05 00:59:48.837970 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-05 00:59:48.837974 | orchestrator | Sunday 05 April 2026 00:56:45 +0000 (0:00:02.741) 0:00:11.696 **********
2026-04-05 00:59:48.837978 | orchestrator | changed: [testbed-node-0]
2026-04-05
00:59:48.837981 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:48.837985 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:48.837989 | orchestrator | 2026-04-05 00:59:48.837993 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-05 00:59:48.837996 | orchestrator | Sunday 05 April 2026 00:56:49 +0000 (0:00:03.482) 0:00:15.178 ********** 2026-04-05 00:59:48.838000 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:48.838004 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:48.838056 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:48.838072 | orchestrator | 2026-04-05 00:59:48.838076 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-05 00:59:48.838080 | orchestrator | Sunday 05 April 2026 00:56:50 +0000 (0:00:01.559) 0:00:16.738 ********** 2026-04-05 00:59:48.838084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 00:59:48.838093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 00:59:48.838100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 00:59:48.838109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 00:59:48.838113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 00:59:48.838121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 00:59:48.838125 | orchestrator | 2026-04-05 00:59:48.838129 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 00:59:48.838133 | orchestrator | Sunday 05 April 2026 00:56:52 +0000 (0:00:02.063) 0:00:18.802 ********** 2026-04-05 00:59:48.838137 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:48.838144 | orchestrator | skipping: [testbed-node-1] 2026-04-05 00:59:48.838147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 00:59:48.838151 | orchestrator | 2026-04-05 00:59:48.838157 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 00:59:48.838161 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:00.502) 0:00:19.304 ********** 2026-04-05 00:59:48.838165 | orchestrator | 2026-04-05 00:59:48.838169 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 00:59:48.838173 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:00.068) 0:00:19.373 ********** 2026-04-05 00:59:48.838177 | orchestrator | 2026-04-05 00:59:48.838180 | orchestrator | TASK [opensearch : Flush 
handlers] ********************************************* 2026-04-05 00:59:48.838184 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:00.073) 0:00:19.446 ********** 2026-04-05 00:59:48.838188 | orchestrator | 2026-04-05 00:59:48.838192 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-05 00:59:48.838195 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:00.069) 0:00:19.516 ********** 2026-04-05 00:59:48.838199 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:48.838203 | orchestrator | 2026-04-05 00:59:48.838207 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-05 00:59:48.838210 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:00.208) 0:00:19.725 ********** 2026-04-05 00:59:48.838214 | orchestrator | skipping: [testbed-node-0] 2026-04-05 00:59:48.838218 | orchestrator | 2026-04-05 00:59:48.838222 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-05 00:59:48.838225 | orchestrator | Sunday 05 April 2026 00:56:53 +0000 (0:00:00.204) 0:00:19.929 ********** 2026-04-05 00:59:48.838229 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:48.838233 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:48.838237 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:48.838241 | orchestrator | 2026-04-05 00:59:48.838244 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-05 00:59:48.838248 | orchestrator | Sunday 05 April 2026 00:58:06 +0000 (0:01:12.782) 0:01:32.711 ********** 2026-04-05 00:59:48.838252 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:48.838256 | orchestrator | changed: [testbed-node-1] 2026-04-05 00:59:48.838259 | orchestrator | changed: [testbed-node-2] 2026-04-05 00:59:48.838263 | orchestrator | 2026-04-05 00:59:48.838267 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2026-04-05 00:59:48.838271 | orchestrator | Sunday 05 April 2026 00:59:32 +0000 (0:01:25.366) 0:02:58.078 ********** 2026-04-05 00:59:48.838274 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 00:59:48.838278 | orchestrator | 2026-04-05 00:59:48.838282 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-05 00:59:48.838286 | orchestrator | Sunday 05 April 2026 00:59:32 +0000 (0:00:00.699) 0:02:58.777 ********** 2026-04-05 00:59:48.838290 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:48.838294 | orchestrator | 2026-04-05 00:59:48.838298 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-05 00:59:48.838301 | orchestrator | Sunday 05 April 2026 00:59:35 +0000 (0:00:02.591) 0:03:01.369 ********** 2026-04-05 00:59:48.838305 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:48.838309 | orchestrator | 2026-04-05 00:59:48.838313 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-05 00:59:48.838316 | orchestrator | Sunday 05 April 2026 00:59:37 +0000 (0:00:02.506) 0:03:03.876 ********** 2026-04-05 00:59:48.838320 | orchestrator | ok: [testbed-node-0] 2026-04-05 00:59:48.838324 | orchestrator | 2026-04-05 00:59:48.838328 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-05 00:59:48.838331 | orchestrator | Sunday 05 April 2026 00:59:40 +0000 (0:00:02.541) 0:03:06.417 ********** 2026-04-05 00:59:48.838335 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:48.838342 | orchestrator | 2026-04-05 00:59:48.838346 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-05 00:59:48.838349 | orchestrator | Sunday 05 April 2026 00:59:43 +0000 
(0:00:03.041) 0:03:09.459 ********** 2026-04-05 00:59:48.838353 | orchestrator | changed: [testbed-node-0] 2026-04-05 00:59:48.838357 | orchestrator | 2026-04-05 00:59:48.838361 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 00:59:48.838366 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 00:59:48.838371 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:59:48.838378 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 00:59:48.838382 | orchestrator | 2026-04-05 00:59:48.838387 | orchestrator | 2026-04-05 00:59:48.838391 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 00:59:48.838396 | orchestrator | Sunday 05 April 2026 00:59:46 +0000 (0:00:02.840) 0:03:12.300 ********** 2026-04-05 00:59:48.838400 | orchestrator | =============================================================================== 2026-04-05 00:59:48.838404 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.37s 2026-04-05 00:59:48.838409 | orchestrator | opensearch : Restart opensearch container ------------------------------ 72.78s 2026-04-05 00:59:48.838413 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.48s 2026-04-05 00:59:48.838417 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.04s 2026-04-05 00:59:48.838422 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.84s 2026-04-05 00:59:48.838427 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.74s 2026-04-05 00:59:48.838431 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.62s 
2026-04-05 00:59:48.838436 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.59s 2026-04-05 00:59:48.838443 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.54s 2026-04-05 00:59:48.838447 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.51s 2026-04-05 00:59:48.838452 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.06s 2026-04-05 00:59:48.838456 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.56s 2026-04-05 00:59:48.838461 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.15s 2026-04-05 00:59:48.838465 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.12s 2026-04-05 00:59:48.838470 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.02s 2026-04-05 00:59:48.838474 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.02s 2026-04-05 00:59:48.838478 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2026-04-05 00:59:48.838483 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.64s 2026-04-05 00:59:48.838487 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-04-05 00:59:48.838492 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-04-05 00:59:48.838497 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task b9f9fe4a-095a-46c2-a3d1-d49948e77915 is in state SUCCESS 2026-04-05 00:59:48.838628 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:48.838637 | orchestrator | 2026-04-05 00:59:48 | INFO  | Task 
8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 00:59:48.838641 | orchestrator | 2026-04-05 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:51.886257 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 00:59:51.888500 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:51.892517 | orchestrator | 2026-04-05 00:59:51 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 00:59:51.895616 | orchestrator | 2026-04-05 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:54.941324 | orchestrator | 2026-04-05 00:59:54 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 00:59:54.943309 | orchestrator | 2026-04-05 00:59:54 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:54.947158 | orchestrator | 2026-04-05 00:59:54 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 00:59:54.947220 | orchestrator | 2026-04-05 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 00:59:57.987436 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 00:59:57.989745 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 00:59:57.991921 | orchestrator | 2026-04-05 00:59:57 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 00:59:57.992128 | orchestrator | 2026-04-05 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:01.042160 | orchestrator | 2026-04-05 01:00:01 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:01.042741 | orchestrator | 2026-04-05 01:00:01 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state 
STARTED 2026-04-05 01:00:01.043623 | orchestrator | 2026-04-05 01:00:01 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:01.043776 | orchestrator | 2026-04-05 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:04.088562 | orchestrator | 2026-04-05 01:00:04 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:04.090736 | orchestrator | 2026-04-05 01:00:04 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 01:00:04.091939 | orchestrator | 2026-04-05 01:00:04 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:04.091971 | orchestrator | 2026-04-05 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:07.131316 | orchestrator | 2026-04-05 01:00:07 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:07.132425 | orchestrator | 2026-04-05 01:00:07 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 01:00:07.134337 | orchestrator | 2026-04-05 01:00:07 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:07.134424 | orchestrator | 2026-04-05 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:10.170264 | orchestrator | 2026-04-05 01:00:10 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:10.171848 | orchestrator | 2026-04-05 01:00:10 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state STARTED 2026-04-05 01:00:10.175256 | orchestrator | 2026-04-05 01:00:10 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:10.175888 | orchestrator | 2026-04-05 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:13.214444 | orchestrator | 2026-04-05 01:00:13 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:13.215638 | orchestrator | 
2026-04-05 01:00:13 | INFO  | Task 96b6a315-d457-49fb-b111-e531fff1043c is in state SUCCESS 2026-04-05 01:00:13.221312 | orchestrator | 2026-04-05 01:00:13.221745 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 01:00:13.221762 | orchestrator | 2.16.14 2026-04-05 01:00:13.221775 | orchestrator | 2026-04-05 01:00:13.221787 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-05 01:00:13.221798 | orchestrator | 2026-04-05 01:00:13.221809 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 01:00:13.221819 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:00.605) 0:00:00.605 ********** 2026-04-05 01:00:13.221830 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:13.221842 | orchestrator | 2026-04-05 01:00:13.221852 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 01:00:13.221863 | orchestrator | Sunday 05 April 2026 00:58:14 +0000 (0:00:00.642) 0:00:01.248 ********** 2026-04-05 01:00:13.221874 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.221884 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.221895 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.221906 | orchestrator | 2026-04-05 01:00:13.221917 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 01:00:13.221927 | orchestrator | Sunday 05 April 2026 00:58:15 +0000 (0:00:01.095) 0:00:02.344 ********** 2026-04-05 01:00:13.221938 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.221948 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.221959 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.221969 | orchestrator | 2026-04-05 01:00:13.221980 | orchestrator | TASK [ceph-facts : Check if podman binary 
is present] ************************** 2026-04-05 01:00:13.222125 | orchestrator | Sunday 05 April 2026 00:58:16 +0000 (0:00:00.322) 0:00:02.666 ********** 2026-04-05 01:00:13.222141 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.222151 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.222162 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.222173 | orchestrator | 2026-04-05 01:00:13.222184 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 01:00:13.222194 | orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:00.804) 0:00:03.471 ********** 2026-04-05 01:00:13.222205 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.222215 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.222226 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.222236 | orchestrator | 2026-04-05 01:00:13.222247 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 01:00:13.222258 | orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:00.291) 0:00:03.763 ********** 2026-04-05 01:00:13.222268 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.222279 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.222291 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.222304 | orchestrator | 2026-04-05 01:00:13.222317 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 01:00:13.222331 | orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:00.321) 0:00:04.084 ********** 2026-04-05 01:00:13.222344 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.222357 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.222370 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.222382 | orchestrator | 2026-04-05 01:00:13.222393 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 01:00:13.222404 | 
orchestrator | Sunday 05 April 2026 00:58:17 +0000 (0:00:00.287) 0:00:04.371 ********** 2026-04-05 01:00:13.222415 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.222427 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.222437 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.222468 | orchestrator | 2026-04-05 01:00:13.222480 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 01:00:13.222491 | orchestrator | Sunday 05 April 2026 00:58:18 +0000 (0:00:00.539) 0:00:04.911 ********** 2026-04-05 01:00:13.222502 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.222513 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.222523 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.222534 | orchestrator | 2026-04-05 01:00:13.222545 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 01:00:13.222555 | orchestrator | Sunday 05 April 2026 00:58:18 +0000 (0:00:00.311) 0:00:05.222 ********** 2026-04-05 01:00:13.222566 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:13.222577 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:13.222588 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:13.222599 | orchestrator | 2026-04-05 01:00:13.222609 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 01:00:13.222620 | orchestrator | Sunday 05 April 2026 00:58:19 +0000 (0:00:00.830) 0:00:06.053 ********** 2026-04-05 01:00:13.222631 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.222641 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.222652 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.222663 | orchestrator | 2026-04-05 01:00:13.222674 | 
orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 01:00:13.222696 | orchestrator | Sunday 05 April 2026 00:58:20 +0000 (0:00:00.470) 0:00:06.523 ********** 2026-04-05 01:00:13.222707 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:13.222718 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:13.222729 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:13.222740 | orchestrator | 2026-04-05 01:00:13.222751 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 01:00:13.222762 | orchestrator | Sunday 05 April 2026 00:58:23 +0000 (0:00:03.317) 0:00:09.841 ********** 2026-04-05 01:00:13.222772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 01:00:13.222783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:00:13.222795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:00:13.222805 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.222816 | orchestrator | 2026-04-05 01:00:13.222877 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 01:00:13.222891 | orchestrator | Sunday 05 April 2026 00:58:23 +0000 (0:00:00.428) 0:00:10.269 ********** 2026-04-05 01:00:13.222904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.222917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.222928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.222939 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.222950 | orchestrator | 2026-04-05 01:00:13.222961 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 01:00:13.222971 | orchestrator | Sunday 05 April 2026 00:58:24 +0000 (0:00:00.853) 0:00:11.123 ********** 2026-04-05 01:00:13.223006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.223030 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.223042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.223053 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223064 | orchestrator | 2026-04-05 01:00:13.223075 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 01:00:13.223086 | orchestrator | Sunday 05 April 2026 00:58:24 +0000 (0:00:00.191) 0:00:11.315 ********** 2026-04-05 01:00:13.223099 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e5ff8ec9616d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 00:58:21.223787', 'end': '2026-04-05 00:58:21.270205', 'delta': '0:00:00.046418', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e5ff8ec9616d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 01:00:13.223119 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e49deade5e91', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 00:58:22.348791', 'end': '2026-04-05 00:58:22.378468', 'delta': '0:00:00.029677', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e49deade5e91'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 01:00:13.223168 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '80df088073d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 00:58:23.248646', 'end': '2026-04-05 00:58:23.297937', 'delta': '0:00:00.049291', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['80df088073d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 01:00:13.223205 | orchestrator | 2026-04-05 01:00:13.223218 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 01:00:13.223236 | orchestrator | Sunday 05 April 2026 00:58:25 +0000 (0:00:00.409) 0:00:11.724 ********** 2026-04-05 01:00:13.223247 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.223258 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.223269 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.223279 | orchestrator | 2026-04-05 01:00:13.223290 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 01:00:13.223301 | orchestrator | Sunday 05 April 2026 00:58:25 +0000 (0:00:00.463) 0:00:12.188 ********** 2026-04-05 01:00:13.223312 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-05 01:00:13.223323 | orchestrator | 2026-04-05 01:00:13.223334 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 01:00:13.223345 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:01.304) 0:00:13.492 ********** 2026-04-05 
01:00:13.223355 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223366 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223377 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223387 | orchestrator | 2026-04-05 01:00:13.223398 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 01:00:13.223409 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:00.326) 0:00:13.819 ********** 2026-04-05 01:00:13.223420 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223431 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223441 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223452 | orchestrator | 2026-04-05 01:00:13.223463 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 01:00:13.223473 | orchestrator | Sunday 05 April 2026 00:58:27 +0000 (0:00:00.417) 0:00:14.236 ********** 2026-04-05 01:00:13.223485 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223495 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223506 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223516 | orchestrator | 2026-04-05 01:00:13.223527 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 01:00:13.223538 | orchestrator | Sunday 05 April 2026 00:58:28 +0000 (0:00:00.499) 0:00:14.735 ********** 2026-04-05 01:00:13.223549 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.223559 | orchestrator | 2026-04-05 01:00:13.223570 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 01:00:13.223580 | orchestrator | Sunday 05 April 2026 00:58:28 +0000 (0:00:00.149) 0:00:14.885 ********** 2026-04-05 01:00:13.223591 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223602 | orchestrator | 2026-04-05 01:00:13.223613 | orchestrator | 
TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 01:00:13.223623 | orchestrator | Sunday 05 April 2026 00:58:28 +0000 (0:00:00.208) 0:00:15.093 ********** 2026-04-05 01:00:13.223634 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223644 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223655 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223665 | orchestrator | 2026-04-05 01:00:13.223676 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 01:00:13.223687 | orchestrator | Sunday 05 April 2026 00:58:29 +0000 (0:00:00.310) 0:00:15.404 ********** 2026-04-05 01:00:13.223697 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223708 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223718 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223729 | orchestrator | 2026-04-05 01:00:13.223739 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 01:00:13.223750 | orchestrator | Sunday 05 April 2026 00:58:29 +0000 (0:00:00.336) 0:00:15.740 ********** 2026-04-05 01:00:13.223761 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223771 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223782 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223792 | orchestrator | 2026-04-05 01:00:13.223808 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 01:00:13.223825 | orchestrator | Sunday 05 April 2026 00:58:29 +0000 (0:00:00.519) 0:00:16.260 ********** 2026-04-05 01:00:13.223836 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223847 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223857 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223868 | orchestrator | 2026-04-05 01:00:13.223878 | orchestrator | 
TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 01:00:13.223889 | orchestrator | Sunday 05 April 2026 00:58:30 +0000 (0:00:00.345) 0:00:16.605 ********** 2026-04-05 01:00:13.223900 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223910 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.223921 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.223931 | orchestrator | 2026-04-05 01:00:13.223942 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 01:00:13.223953 | orchestrator | Sunday 05 April 2026 00:58:30 +0000 (0:00:00.353) 0:00:16.959 ********** 2026-04-05 01:00:13.223964 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.223974 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.224004 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.224051 | orchestrator | 2026-04-05 01:00:13.224064 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 01:00:13.224075 | orchestrator | Sunday 05 April 2026 00:58:30 +0000 (0:00:00.322) 0:00:17.281 ********** 2026-04-05 01:00:13.224086 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.224096 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.224107 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.224117 | orchestrator | 2026-04-05 01:00:13.224128 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 01:00:13.224139 | orchestrator | Sunday 05 April 2026 00:58:31 +0000 (0:00:00.559) 0:00:17.841 ********** 2026-04-05 01:00:13.224150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03', 
'dm-uuid-LVM-8E1xuPLEYx1uTwydDUNwPMLUgzpgnl2IeAYMIzK2AO6YtTwcvavu13HOl75B9Evz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61', 'dm-uuid-LVM-maRrlPjNmQL0H9aadu5k71QFHeXfjfdEipyIlSH5OXr1U4BAJhfLJgpSdP33B0eG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224205 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15', 
'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RpfwZn-HTk4-RnlD-ne4J-uLT2-7pJ1-8ZtVeR', 'scsi-0QEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e', 'scsi-SQEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tlZRre-3mqn-7hAq-j2kl-vH4H-yCfn-BXadiQ', 'scsi-0QEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3', 'scsi-SQEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86', 'scsi-SQEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194', 'dm-uuid-LVM-7EWndWI44TagQTqMBy1Pv9rnP4tweZpxjHYUBR4fuE24TPDe2OzsOGLQTaEDcelq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960', 'dm-uuid-LVM-4HnV0lqPVUvHugf1jZmoUAfymWk8v99yHxbGxoLLfHrZ8usKW78V8J8BL2Nt20SL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224448 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.224464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224565 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a', 'dm-uuid-LVM-ZCrIUefZlnGHrpwArsx1M23Jvupc0s9GS9IrlP81CvONv0g7P0uPjtzc9mwvdwJL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523', 'dm-uuid-LVM-J6tB4UumkvukDnmlGPtlO0hmrLdkcf5efjG2SWt6Da1YZck7gyKdpi3JhxsDDh5X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jz3I4S-Jrdi-hSan-hrjR-5qZZ-pjy6-DhDLqU', 'scsi-0QEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189', 'scsi-SQEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VylcLb-buVg-z7iW-K22m-TKas-Ei58-re0uM9', 'scsi-0QEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a', 'scsi-SQEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8', 'scsi-SQEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-05 01:00:13.224792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224802 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.224818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 01:00:13.224838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224858 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tgXYE1-zQXF-aFlN-fdHh-Wc5z-AMRd-c1q17F', 'scsi-0QEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6', 'scsi-SQEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dyVz18-JZh1-rTMZ-E3Xl-m0dX-jcd7-Tl0RJt', 'scsi-0QEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379', 'scsi-SQEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da', 'scsi-SQEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 01:00:13.224916 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.224927 | orchestrator | 2026-04-05 01:00:13.224938 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 01:00:13.224948 | orchestrator | Sunday 05 April 2026 00:58:32 +0000 (0:00:00.683) 0:00:18.524 ********** 2026-04-05 01:00:13.224960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03', 'dm-uuid-LVM-8E1xuPLEYx1uTwydDUNwPMLUgzpgnl2IeAYMIzK2AO6YtTwcvavu13HOl75B9Evz'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.224979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61', 'dm-uuid-LVM-maRrlPjNmQL0H9aadu5k71QFHeXfjfdEipyIlSH5OXr1U4BAJhfLJgpSdP33B0eG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cf61d5b-3b59-4520-9cc2-8285b407910f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9657aa76--f30a--575f--81fa--dc230eadde03-osd--block--9657aa76--f30a--575f--81fa--dc230eadde03'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RpfwZn-HTk4-RnlD-ne4J-uLT2-7pJ1-8ZtVeR', 'scsi-0QEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e', 'scsi-SQEMU_QEMU_HARDDISK_7e73ac44-76fe-4853-8c7e-76a35261b68e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225222 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a27db0d--e52c--5340--bfad--66c075ab1c61-osd--block--8a27db0d--e52c--5340--bfad--66c075ab1c61'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tlZRre-3mqn-7hAq-j2kl-vH4H-yCfn-BXadiQ', 'scsi-0QEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3', 'scsi-SQEMU_QEMU_HARDDISK_98068efd-febf-4a3d-a208-2ec8969defa3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86', 'scsi-SQEMU_QEMU_HARDDISK_89f7f52a-567c-4cab-9983-76602271fa86'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225257 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194', 'dm-uuid-LVM-7EWndWI44TagQTqMBy1Pv9rnP4tweZpxjHYUBR4fuE24TPDe2OzsOGLQTaEDcelq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960', 'dm-uuid-LVM-4HnV0lqPVUvHugf1jZmoUAfymWk8v99yHxbGxoLLfHrZ8usKW78V8J8BL2Nt20SL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-53-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225320 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.225335 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225353 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225365 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225386 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225409 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15', 'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16', 
'scsi-SQEMU_QEMU_HARDDISK_60b18cd9-cde4-4a47-bd2f-2a39c218ea3e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225451 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--84662fb7--c7ec--5f43--83c1--849532919194-osd--block--84662fb7--c7ec--5f43--83c1--849532919194'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Jz3I4S-Jrdi-hSan-hrjR-5qZZ-pjy6-DhDLqU', 'scsi-0QEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189', 'scsi-SQEMU_QEMU_HARDDISK_cd3e0233-fa53-4a76-8124-17084efe5189'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225463 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a', 
'dm-uuid-LVM-ZCrIUefZlnGHrpwArsx1M23Jvupc0s9GS9IrlP81CvONv0g7P0uPjtzc9mwvdwJL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225478 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--df39e39b--9449--5ecb--9afa--151663e06960-osd--block--df39e39b--9449--5ecb--9afa--151663e06960'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VylcLb-buVg-z7iW-K22m-TKas-Ei58-re0uM9', 'scsi-0QEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a', 'scsi-SQEMU_QEMU_HARDDISK_38b6e962-bf0a-4437-92be-df56b43fc17a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225496 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523', 'dm-uuid-LVM-J6tB4UumkvukDnmlGPtlO0hmrLdkcf5efjG2SWt6Da1YZck7gyKdpi3JhxsDDh5X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8', 'scsi-SQEMU_QEMU_HARDDISK_ca139ca2-9428-4862-b2c5-b387113f92e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-03-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225544 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.225555 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225569 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225600 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225610 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225620 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225630 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16', 'scsi-SQEMU_QEMU_HARDDISK_af4f9d3c-bb5f-4922-ae65-7a1b824f675e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 01:00:13.225668 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a-osd--block--01ae77dd--7b74--52e9--8a2e--c19e3ec8ad7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tgXYE1-zQXF-aFlN-fdHh-Wc5z-AMRd-c1q17F', 'scsi-0QEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6', 'scsi-SQEMU_QEMU_HARDDISK_50c87a36-4bc6-4e8b-871c-1038d731a8f6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225679 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1dbeab33--88c6--544f--8f85--2175dc04d523-osd--block--1dbeab33--88c6--544f--8f85--2175dc04d523'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dyVz18-JZh1-rTMZ-E3Xl-m0dX-jcd7-Tl0RJt', 'scsi-0QEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379', 'scsi-SQEMU_QEMU_HARDDISK_16d4ab4f-df2e-4494-9775-e59359a49379'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da', 'scsi-SQEMU_QEMU_HARDDISK_f126a5b7-c683-4f0b-86b6-1ac9b9bcd2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225708 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 01:00:13.225724 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.225734 | orchestrator | 2026-04-05 01:00:13.225744 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 01:00:13.225753 | orchestrator | Sunday 05 April 2026 00:58:32 +0000 (0:00:00.747) 0:00:19.271 ********** 2026-04-05 01:00:13.225763 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.225773 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.225782 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.225792 | orchestrator | 2026-04-05 01:00:13.225801 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 01:00:13.225811 | orchestrator | Sunday 05 April 2026 00:58:33 +0000 (0:00:00.676) 0:00:19.947 ********** 2026-04-05 01:00:13.225820 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.225830 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.225839 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.225849 | orchestrator | 2026-04-05 01:00:13.225858 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:00:13.225868 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:00.520) 0:00:20.468 ********** 2026-04-05 01:00:13.225877 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.225887 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.225896 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.225906 | orchestrator | 2026-04-05 01:00:13.225915 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:00:13.225925 | orchestrator | Sunday 05 April 2026 00:58:34 +0000 (0:00:00.665) 0:00:21.133 
********** 2026-04-05 01:00:13.225934 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.225944 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.225953 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.225963 | orchestrator | 2026-04-05 01:00:13.225972 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 01:00:13.225982 | orchestrator | Sunday 05 April 2026 00:58:35 +0000 (0:00:00.334) 0:00:21.468 ********** 2026-04-05 01:00:13.226061 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226077 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.226087 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.226097 | orchestrator | 2026-04-05 01:00:13.226106 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 01:00:13.226116 | orchestrator | Sunday 05 April 2026 00:58:35 +0000 (0:00:00.416) 0:00:21.885 ********** 2026-04-05 01:00:13.226125 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226135 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.226144 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.226154 | orchestrator | 2026-04-05 01:00:13.226163 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 01:00:13.226173 | orchestrator | Sunday 05 April 2026 00:58:36 +0000 (0:00:00.535) 0:00:22.420 ********** 2026-04-05 01:00:13.226183 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-05 01:00:13.226192 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 01:00:13.226202 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 01:00:13.226212 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 01:00:13.226221 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 01:00:13.226231 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 01:00:13.226246 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 01:00:13.226256 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 01:00:13.226266 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 01:00:13.226275 | orchestrator | 2026-04-05 01:00:13.226285 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 01:00:13.226294 | orchestrator | Sunday 05 April 2026 00:58:36 +0000 (0:00:00.895) 0:00:23.315 ********** 2026-04-05 01:00:13.226304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 01:00:13.226313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 01:00:13.226323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 01:00:13.226332 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 01:00:13.226351 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 01:00:13.226361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 01:00:13.226370 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.226385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 01:00:13.226395 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 01:00:13.226404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 01:00:13.226413 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.226423 | orchestrator | 2026-04-05 01:00:13.226433 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 01:00:13.226443 | orchestrator | Sunday 05 April 2026 00:58:37 +0000 (0:00:00.353) 0:00:23.669 ********** 2026-04-05 
01:00:13.226453 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:00:13.226462 | orchestrator | 2026-04-05 01:00:13.226472 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 01:00:13.226482 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:00.806) 0:00:24.475 ********** 2026-04-05 01:00:13.226499 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226509 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.226519 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.226528 | orchestrator | 2026-04-05 01:00:13.226538 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 01:00:13.226547 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:00.336) 0:00:24.812 ********** 2026-04-05 01:00:13.226557 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226566 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.226576 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.226585 | orchestrator | 2026-04-05 01:00:13.226595 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 01:00:13.226605 | orchestrator | Sunday 05 April 2026 00:58:38 +0000 (0:00:00.297) 0:00:25.110 ********** 2026-04-05 01:00:13.226614 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226623 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.226633 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:00:13.226642 | orchestrator | 2026-04-05 01:00:13.226652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 01:00:13.226661 | orchestrator | Sunday 05 April 2026 00:58:39 +0000 (0:00:00.340) 0:00:25.451 ********** 2026-04-05 
01:00:13.226671 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.226681 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.226690 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.226700 | orchestrator | 2026-04-05 01:00:13.226709 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 01:00:13.226719 | orchestrator | Sunday 05 April 2026 00:58:39 +0000 (0:00:00.791) 0:00:26.242 ********** 2026-04-05 01:00:13.226734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:13.226744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:13.226753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:13.226763 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226772 | orchestrator | 2026-04-05 01:00:13.226782 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 01:00:13.226792 | orchestrator | Sunday 05 April 2026 00:58:40 +0000 (0:00:00.386) 0:00:26.629 ********** 2026-04-05 01:00:13.226801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:13.226811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:13.226821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:13.226830 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226840 | orchestrator | 2026-04-05 01:00:13.226850 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 01:00:13.226859 | orchestrator | Sunday 05 April 2026 00:58:40 +0000 (0:00:00.408) 0:00:27.037 ********** 2026-04-05 01:00:13.226869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 01:00:13.226878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 01:00:13.226887 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 01:00:13.226897 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.226906 | orchestrator | 2026-04-05 01:00:13.226916 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 01:00:13.226926 | orchestrator | Sunday 05 April 2026 00:58:41 +0000 (0:00:00.380) 0:00:27.417 ********** 2026-04-05 01:00:13.226935 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:00:13.226945 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:00:13.226954 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:00:13.226963 | orchestrator | 2026-04-05 01:00:13.226973 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 01:00:13.226983 | orchestrator | Sunday 05 April 2026 00:58:41 +0000 (0:00:00.300) 0:00:27.718 ********** 2026-04-05 01:00:13.227061 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 01:00:13.227071 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 01:00:13.227081 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 01:00:13.227091 | orchestrator | 2026-04-05 01:00:13.227100 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 01:00:13.227110 | orchestrator | Sunday 05 April 2026 00:58:41 +0000 (0:00:00.544) 0:00:28.263 ********** 2026-04-05 01:00:13.227119 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:13.227129 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:13.227138 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:13.227148 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:00:13.227157 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-05 01:00:13.227167 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:00:13.227180 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:00:13.227190 | orchestrator | 2026-04-05 01:00:13.227200 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 01:00:13.227209 | orchestrator | Sunday 05 April 2026 00:58:42 +0000 (0:00:01.050) 0:00:29.313 ********** 2026-04-05 01:00:13.227219 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 01:00:13.227228 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 01:00:13.227238 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 01:00:13.227257 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 01:00:13.227266 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 01:00:13.227276 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 01:00:13.227291 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 01:00:13.227301 | orchestrator | 2026-04-05 01:00:13.227311 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-05 01:00:13.227321 | orchestrator | Sunday 05 April 2026 00:58:45 +0000 (0:00:02.131) 0:00:31.445 ********** 2026-04-05 01:00:13.227330 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:00:13.227340 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:00:13.227349 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-05 01:00:13.227358 | orchestrator | 2026-04-05 01:00:13.227368 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-05 01:00:13.227377 | orchestrator | Sunday 05 April 2026 00:58:45 +0000 (0:00:00.384) 0:00:31.829 ********** 2026-04-05 01:00:13.227388 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:00:13.227398 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:00:13.227408 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:00:13.227418 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:00:13.227429 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 01:00:13.227438 | orchestrator | 2026-04-05 01:00:13.227448 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-05 01:00:13.227457 | orchestrator | Sunday 05 April 2026 00:59:23 +0000 (0:00:38.494) 0:01:10.323 ********** 2026-04-05 01:00:13.227467 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227476 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227485 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227495 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227504 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227514 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227523 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-05 01:00:13.227532 | orchestrator | 2026-04-05 01:00:13.227542 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-05 01:00:13.227551 | orchestrator | Sunday 05 April 2026 00:59:44 +0000 (0:00:20.489) 0:01:30.813 ********** 2026-04-05 01:00:13.227567 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227576 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227585 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227595 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227604 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227617 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227627 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 01:00:13.227637 | orchestrator | 2026-04-05 01:00:13.227646 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-05 01:00:13.227655 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:09.752) 0:01:40.566 ********** 2026-04-05 01:00:13.227665 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227674 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:00:13.227684 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:00:13.227693 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227703 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:00:13.227717 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:00:13.227727 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227737 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:00:13.227746 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:00:13.227756 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227765 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 01:00:13.227775 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 01:00:13.227784 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 01:00:13.227794 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-05 01:00:13.227803 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 01:00:13.227812 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 01:00:13.227822 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-05 01:00:13.227831 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 01:00:13.227841 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-05 01:00:13.227850 | orchestrator |
2026-04-05 01:00:13.227860 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:00:13.227870 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-05 01:00:13.227881 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-05 01:00:13.227891 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-05 01:00:13.227900 | orchestrator |
2026-04-05 01:00:13.227910 | orchestrator |
2026-04-05 01:00:13.227920 | orchestrator |
2026-04-05 01:00:13.227929 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:00:13.227939 | orchestrator | Sunday 05 April 2026 01:00:12 +0000 (0:00:17.877) 0:01:58.446 **********
2026-04-05 01:00:13.227954 | orchestrator | ===============================================================================
2026-04-05 01:00:13.227963 | orchestrator | create openstack pool(s) ----------------------------------------------- 38.49s
2026-04-05 01:00:13.227973 | orchestrator | generate keys ---------------------------------------------------------- 20.49s
2026-04-05 01:00:13.227983 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.88s
2026-04-05 01:00:13.228018 | orchestrator | get keys from monitors -------------------------------------------------- 9.75s
2026-04-05 01:00:13.228028 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.32s
2026-04-05 01:00:13.228038 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.13s
2026-04-05 01:00:13.228047 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.30s
2026-04-05 01:00:13.228057 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 1.10s
2026-04-05 01:00:13.228066 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.05s
2026-04-05 01:00:13.228076 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.90s
2026-04-05 01:00:13.228086 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s
2026-04-05 01:00:13.228095 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.83s
2026-04-05 01:00:13.228104 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.81s
2026-04-05 01:00:13.228114 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2026-04-05 01:00:13.228123 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.79s
2026-04-05 01:00:13.228133 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.75s
2026-04-05 01:00:13.228142 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.68s
2026-04-05 01:00:13.228158 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s
2026-04-05 01:00:13.228168 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s
2026-04-05 
01:00:13.228177 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2026-04-05 01:00:13.228187 | orchestrator | 2026-04-05 01:00:13 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:13.228196 | orchestrator | 2026-04-05 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:16.277669 | orchestrator | 2026-04-05 01:00:16 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:16.280902 | orchestrator | 2026-04-05 01:00:16 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:16.283626 | orchestrator | 2026-04-05 01:00:16 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:16.283668 | orchestrator | 2026-04-05 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:19.330234 | orchestrator | 2026-04-05 01:00:19 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:19.332977 | orchestrator | 2026-04-05 01:00:19 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:19.333828 | orchestrator | 2026-04-05 01:00:19 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:19.333902 | orchestrator | 2026-04-05 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:22.385439 | orchestrator | 2026-04-05 01:00:22 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:22.387589 | orchestrator | 2026-04-05 01:00:22 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:22.389262 | orchestrator | 2026-04-05 01:00:22 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:22.389332 | orchestrator | 2026-04-05 01:00:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:25.435452 | orchestrator | 2026-04-05 01:00:25 | INFO  | Task 
daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:25.435961 | orchestrator | 2026-04-05 01:00:25 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:25.440129 | orchestrator | 2026-04-05 01:00:25 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:25.440391 | orchestrator | 2026-04-05 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:28.491612 | orchestrator | 2026-04-05 01:00:28 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:28.493535 | orchestrator | 2026-04-05 01:00:28 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:28.495827 | orchestrator | 2026-04-05 01:00:28 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:28.495905 | orchestrator | 2026-04-05 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:31.545164 | orchestrator | 2026-04-05 01:00:31 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:31.547625 | orchestrator | 2026-04-05 01:00:31 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:31.549076 | orchestrator | 2026-04-05 01:00:31 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:31.549122 | orchestrator | 2026-04-05 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:34.599620 | orchestrator | 2026-04-05 01:00:34 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:34.601052 | orchestrator | 2026-04-05 01:00:34 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:34.602252 | orchestrator | 2026-04-05 01:00:34 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:34.602282 | orchestrator | 2026-04-05 01:00:34 | INFO  | Wait 1 second(s) until the next 
check 2026-04-05 01:00:37.676594 | orchestrator | 2026-04-05 01:00:37 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:37.678595 | orchestrator | 2026-04-05 01:00:37 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:37.679772 | orchestrator | 2026-04-05 01:00:37 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:37.679820 | orchestrator | 2026-04-05 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:40.718599 | orchestrator | 2026-04-05 01:00:40 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:40.719043 | orchestrator | 2026-04-05 01:00:40 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:40.720557 | orchestrator | 2026-04-05 01:00:40 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:40.720601 | orchestrator | 2026-04-05 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:43.771520 | orchestrator | 2026-04-05 01:00:43 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:43.774606 | orchestrator | 2026-04-05 01:00:43 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:43.777138 | orchestrator | 2026-04-05 01:00:43 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:43.777225 | orchestrator | 2026-04-05 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:46.825036 | orchestrator | 2026-04-05 01:00:46 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:46.826185 | orchestrator | 2026-04-05 01:00:46 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:46.828550 | orchestrator | 2026-04-05 01:00:46 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 
01:00:46.828586 | orchestrator | 2026-04-05 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:49.870565 | orchestrator | 2026-04-05 01:00:49 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:49.871448 | orchestrator | 2026-04-05 01:00:49 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state STARTED 2026-04-05 01:00:49.872501 | orchestrator | 2026-04-05 01:00:49 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:49.872542 | orchestrator | 2026-04-05 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:52.935789 | orchestrator | 2026-04-05 01:00:52 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:52.937619 | orchestrator | 2026-04-05 01:00:52 | INFO  | Task a8263af5-c38f-44c3-8c47-4b2e2048268e is in state SUCCESS 2026-04-05 01:00:52.939928 | orchestrator | 2026-04-05 01:00:52 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:52.940062 | orchestrator | 2026-04-05 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:56.015347 | orchestrator | 2026-04-05 01:00:56 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:56.018373 | orchestrator | 2026-04-05 01:00:56 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:00:56.020514 | orchestrator | 2026-04-05 01:00:56 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:56.020728 | orchestrator | 2026-04-05 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:00:59.060127 | orchestrator | 2026-04-05 01:00:59 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:00:59.061165 | orchestrator | 2026-04-05 01:00:59 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:00:59.062916 | orchestrator | 2026-04-05 01:00:59 | 
INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:00:59.062941 | orchestrator | 2026-04-05 01:00:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:02.112489 | orchestrator | 2026-04-05 01:01:02 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:02.115503 | orchestrator | 2026-04-05 01:01:02 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:02.117183 | orchestrator | 2026-04-05 01:01:02 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:02.117232 | orchestrator | 2026-04-05 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:05.168877 | orchestrator | 2026-04-05 01:01:05 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:05.172550 | orchestrator | 2026-04-05 01:01:05 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:05.176609 | orchestrator | 2026-04-05 01:01:05 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:05.176712 | orchestrator | 2026-04-05 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:08.235304 | orchestrator | 2026-04-05 01:01:08 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:08.237105 | orchestrator | 2026-04-05 01:01:08 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:08.238329 | orchestrator | 2026-04-05 01:01:08 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:08.238372 | orchestrator | 2026-04-05 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:11.294821 | orchestrator | 2026-04-05 01:01:11 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:11.297109 | orchestrator | 2026-04-05 01:01:11 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in 
state STARTED 2026-04-05 01:01:11.299606 | orchestrator | 2026-04-05 01:01:11 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:11.299670 | orchestrator | 2026-04-05 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:14.346676 | orchestrator | 2026-04-05 01:01:14 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:14.350376 | orchestrator | 2026-04-05 01:01:14 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:14.353191 | orchestrator | 2026-04-05 01:01:14 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:14.353239 | orchestrator | 2026-04-05 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:17.398678 | orchestrator | 2026-04-05 01:01:17 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:17.400358 | orchestrator | 2026-04-05 01:01:17 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:17.401502 | orchestrator | 2026-04-05 01:01:17 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:17.401562 | orchestrator | 2026-04-05 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:20.454875 | orchestrator | 2026-04-05 01:01:20 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:20.458198 | orchestrator | 2026-04-05 01:01:20 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:20.460997 | orchestrator | 2026-04-05 01:01:20 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:20.461044 | orchestrator | 2026-04-05 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:23.504264 | orchestrator | 2026-04-05 01:01:23 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:23.504884 | orchestrator 
| 2026-04-05 01:01:23 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:23.505594 | orchestrator | 2026-04-05 01:01:23 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:23.505742 | orchestrator | 2026-04-05 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:26.561232 | orchestrator | 2026-04-05 01:01:26 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:26.566218 | orchestrator | 2026-04-05 01:01:26 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:26.568742 | orchestrator | 2026-04-05 01:01:26 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:26.568813 | orchestrator | 2026-04-05 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:29.621714 | orchestrator | 2026-04-05 01:01:29 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:29.625122 | orchestrator | 2026-04-05 01:01:29 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:29.627630 | orchestrator | 2026-04-05 01:01:29 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state STARTED 2026-04-05 01:01:29.627694 | orchestrator | 2026-04-05 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:32.665053 | orchestrator | 2026-04-05 01:01:32 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED 2026-04-05 01:01:32.666095 | orchestrator | 2026-04-05 01:01:32 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED 2026-04-05 01:01:32.671255 | orchestrator | 2026-04-05 01:01:32.671295 | orchestrator | 2026-04-05 01:01:32.671304 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-05 01:01:32.671351 | orchestrator | 2026-04-05 01:01:32.671359 | orchestrator | TASK [Check if ceph keys exist] 
************************************************ 2026-04-05 01:01:32.671382 | orchestrator | Sunday 05 April 2026 01:00:15 +0000 (0:00:00.275) 0:00:00.275 ********** 2026-04-05 01:01:32.671388 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 01:01:32.671438 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671447 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671454 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:01:32.671461 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671468 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-05 01:01:32.671503 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 01:01:32.671510 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:01:32.671517 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 01:01:32.671524 | orchestrator | 2026-04-05 01:01:32.671531 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-05 01:01:32.671671 | orchestrator | Sunday 05 April 2026 01:00:20 +0000 (0:00:05.144) 0:00:05.420 ********** 2026-04-05 01:01:32.671679 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 01:01:32.671686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671693 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671700 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:01:32.671707 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671714 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-05 01:01:32.671721 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 01:01:32.671728 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:01:32.671734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 01:01:32.671741 | orchestrator | 2026-04-05 01:01:32.671748 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-05 01:01:32.671755 | orchestrator | Sunday 05 April 2026 01:00:25 +0000 (0:00:04.456) 0:00:09.877 ********** 2026-04-05 01:01:32.671781 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-05 01:01:32.671788 | orchestrator | 2026-04-05 01:01:32.671795 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-05 01:01:32.671802 | orchestrator | Sunday 05 April 2026 01:00:26 +0000 (0:00:00.995) 0:00:10.873 ********** 2026-04-05 01:01:32.671808 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-05 01:01:32.671815 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671822 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671829 | orchestrator | changed: [testbed-manager -> 
localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:01:32.671836 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.671843 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-05 01:01:32.671849 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-05 01:01:32.671856 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:01:32.671863 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-05 01:01:32.671870 | orchestrator | 2026-04-05 01:01:32.671877 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-05 01:01:32.671884 | orchestrator | Sunday 05 April 2026 01:00:40 +0000 (0:00:14.544) 0:00:25.417 ********** 2026-04-05 01:01:32.671890 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-05 01:01:32.671897 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-05 01:01:32.671904 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 01:01:32.671911 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 01:01:32.671957 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 01:01:32.671964 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 01:01:32.671971 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-05 01:01:32.671978 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-05 01:01:32.671985 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-05 01:01:32.671992 | orchestrator | 2026-04-05 01:01:32.671999 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-05 01:01:32.672007 | orchestrator | Sunday 05 April 2026 01:00:44 +0000 (0:00:04.118) 0:00:29.536 ********** 2026-04-05 01:01:32.672014 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-05 01:01:32.672021 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.672028 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.672035 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:01:32.672042 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 01:01:32.672049 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-05 01:01:32.672056 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-05 01:01:32.672063 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-05 01:01:32.672070 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-05 01:01:32.672082 | orchestrator | 2026-04-05 01:01:32.672089 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:01:32.672096 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:01:32.672104 | orchestrator | 2026-04-05 01:01:32.672111 | orchestrator | 2026-04-05 01:01:32.672118 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 
01:01:32.672125 | orchestrator | Sunday 05 April 2026 01:00:52 +0000 (0:00:07.398) 0:00:36.935 ********** 2026-04-05 01:01:32.672132 | orchestrator | =============================================================================== 2026-04-05 01:01:32.672139 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.54s 2026-04-05 01:01:32.672145 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.40s 2026-04-05 01:01:32.672152 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.14s 2026-04-05 01:01:32.672159 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.46s 2026-04-05 01:01:32.672166 | orchestrator | Check if target directories exist --------------------------------------- 4.12s 2026-04-05 01:01:32.672173 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-04-05 01:01:32.672180 | orchestrator | 2026-04-05 01:01:32.672186 | orchestrator | 2026-04-05 01:01:32.672193 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:01:32.672200 | orchestrator | 2026-04-05 01:01:32.672207 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:01:32.672214 | orchestrator | Sunday 05 April 2026 00:59:47 +0000 (0:00:00.338) 0:00:00.338 ********** 2026-04-05 01:01:32.672220 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.672228 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.672234 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.672239 | orchestrator | 2026-04-05 01:01:32.672319 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:01:32.672334 | orchestrator | Sunday 05 April 2026 00:59:47 +0000 (0:00:00.292) 0:00:00.631 ********** 2026-04-05 01:01:32.672344 | orchestrator | 
ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-05 01:01:32.672355 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-05 01:01:32.672365 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-05 01:01:32.672375 | orchestrator | 2026-04-05 01:01:32.672385 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-05 01:01:32.672394 | orchestrator | 2026-04-05 01:01:32.672404 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 01:01:32.672414 | orchestrator | Sunday 05 April 2026 00:59:47 +0000 (0:00:00.307) 0:00:00.938 ********** 2026-04-05 01:01:32.672424 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:01:32.672434 | orchestrator | 2026-04-05 01:01:32.672444 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-05 01:01:32.672454 | orchestrator | Sunday 05 April 2026 00:59:48 +0000 (0:00:00.673) 0:00:01.612 ********** 2026-04-05 01:01:32.672485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.672505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.672528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.672544 | orchestrator | 2026-04-05 01:01:32.672554 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-05 01:01:32.672564 | orchestrator | Sunday 05 April 2026 00:59:49 +0000 (0:00:01.595) 
0:00:03.207 ********** 2026-04-05 01:01:32.672573 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.672583 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.672594 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.672603 | orchestrator | 2026-04-05 01:01:32.672613 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 01:01:32.672623 | orchestrator | Sunday 05 April 2026 00:59:50 +0000 (0:00:00.323) 0:00:03.531 ********** 2026-04-05 01:01:32.672632 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 01:01:32.672642 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 01:01:32.672652 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 01:01:32.672662 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 01:01:32.672672 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 01:01:32.672682 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 01:01:32.672688 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-05 01:01:32.672695 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 01:01:32.672702 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 01:01:32.672709 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 01:01:32.672716 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 01:01:32.672722 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 01:01:32.672729 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 01:01:32.672736 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 01:01:32.672743 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-05 01:01:32.672750 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 01:01:32.672762 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-05 01:01:32.672769 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-05 01:01:32.672776 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-05 01:01:32.672783 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-05 01:01:32.672790 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-05 01:01:32.672797 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-05 01:01:32.672807 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-05 01:01:32.672814 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-05 01:01:32.672825 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-05 01:01:32.672833 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-05 01:01:32.672840 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-05 
01:01:32.672847 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-05 01:01:32.672854 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-05 01:01:32.672861 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-05 01:01:32.672868 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-05 01:01:32.672874 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-05 01:01:32.672881 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-05 01:01:32.672888 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-05 01:01:32.672895 | orchestrator | 2026-04-05 01:01:32.672902 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.672908 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.876) 0:00:04.407 ********** 2026-04-05 01:01:32.672934 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.672941 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.672947 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.672953 | orchestrator | 2026-04-05 01:01:32.672960 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2026-04-05 01:01:32.672967 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.503) 0:00:04.911 ********** 2026-04-05 01:01:32.672974 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.672981 | orchestrator | 2026-04-05 01:01:32.672988 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.672994 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.128) 0:00:05.040 ********** 2026-04-05 01:01:32.673001 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673008 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.673015 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673026 | orchestrator | 2026-04-05 01:01:32.673033 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673040 | orchestrator | Sunday 05 April 2026 00:59:52 +0000 (0:00:00.271) 0:00:05.311 ********** 2026-04-05 01:01:32.673047 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673063 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673070 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673077 | orchestrator | 2026-04-05 01:01:32.673092 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:01:32.673099 | orchestrator | Sunday 05 April 2026 00:59:52 +0000 (0:00:00.297) 0:00:05.609 ********** 2026-04-05 01:01:32.673105 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673112 | orchestrator | 2026-04-05 01:01:32.673119 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673126 | orchestrator | Sunday 05 April 2026 00:59:52 +0000 (0:00:00.133) 0:00:05.743 ********** 2026-04-05 01:01:32.673133 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673140 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
01:01:32.673146 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673153 | orchestrator | 2026-04-05 01:01:32.673160 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673167 | orchestrator | Sunday 05 April 2026 00:59:53 +0000 (0:00:00.531) 0:00:06.275 ********** 2026-04-05 01:01:32.673174 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673180 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673187 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673194 | orchestrator | 2026-04-05 01:01:32.673201 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:01:32.673208 | orchestrator | Sunday 05 April 2026 00:59:53 +0000 (0:00:00.290) 0:00:06.566 ********** 2026-04-05 01:01:32.673214 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673221 | orchestrator | 2026-04-05 01:01:32.673228 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673234 | orchestrator | Sunday 05 April 2026 00:59:53 +0000 (0:00:00.127) 0:00:06.693 ********** 2026-04-05 01:01:32.673240 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673246 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.673252 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673258 | orchestrator | 2026-04-05 01:01:32.673265 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673275 | orchestrator | Sunday 05 April 2026 00:59:53 +0000 (0:00:00.286) 0:00:06.980 ********** 2026-04-05 01:01:32.673282 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673289 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673296 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673303 | orchestrator | 2026-04-05 01:01:32.673313 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2026-04-05 01:01:32.673320 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.295) 0:00:07.276 ********** 2026-04-05 01:01:32.673327 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673334 | orchestrator | 2026-04-05 01:01:32.673341 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673347 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.123) 0:00:07.399 ********** 2026-04-05 01:01:32.673354 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673361 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.673368 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673375 | orchestrator | 2026-04-05 01:01:32.673381 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673388 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.501) 0:00:07.901 ********** 2026-04-05 01:01:32.673395 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673402 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673408 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673415 | orchestrator | 2026-04-05 01:01:32.673422 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:01:32.673434 | orchestrator | Sunday 05 April 2026 00:59:54 +0000 (0:00:00.309) 0:00:08.210 ********** 2026-04-05 01:01:32.673440 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673447 | orchestrator | 2026-04-05 01:01:32.673454 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673461 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.117) 0:00:08.328 ********** 2026-04-05 01:01:32.673468 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673475 | orchestrator | skipping: [testbed-node-1] 
2026-04-05 01:01:32.673481 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673488 | orchestrator | 2026-04-05 01:01:32.673495 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673502 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.263) 0:00:08.592 ********** 2026-04-05 01:01:32.673508 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673515 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673522 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673529 | orchestrator | 2026-04-05 01:01:32.673536 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:01:32.673543 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.528) 0:00:09.120 ********** 2026-04-05 01:01:32.673550 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673556 | orchestrator | 2026-04-05 01:01:32.673563 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673570 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:00.119) 0:00:09.240 ********** 2026-04-05 01:01:32.673577 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673583 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.673590 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673597 | orchestrator | 2026-04-05 01:01:32.673604 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673610 | orchestrator | Sunday 05 April 2026 00:59:56 +0000 (0:00:00.360) 0:00:09.600 ********** 2026-04-05 01:01:32.673617 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673624 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673631 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673638 | orchestrator | 2026-04-05 01:01:32.673644 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-04-05 01:01:32.673651 | orchestrator | Sunday 05 April 2026 00:59:56 +0000 (0:00:00.313) 0:00:09.914 ********** 2026-04-05 01:01:32.673658 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673665 | orchestrator | 2026-04-05 01:01:32.673672 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673679 | orchestrator | Sunday 05 April 2026 00:59:56 +0000 (0:00:00.113) 0:00:10.027 ********** 2026-04-05 01:01:32.673685 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673692 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.673699 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673706 | orchestrator | 2026-04-05 01:01:32.673712 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673719 | orchestrator | Sunday 05 April 2026 00:59:57 +0000 (0:00:00.277) 0:00:10.305 ********** 2026-04-05 01:01:32.673726 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673733 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673740 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673746 | orchestrator | 2026-04-05 01:01:32.673753 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:01:32.673760 | orchestrator | Sunday 05 April 2026 00:59:57 +0000 (0:00:00.500) 0:00:10.806 ********** 2026-04-05 01:01:32.673767 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673774 | orchestrator | 2026-04-05 01:01:32.673781 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673787 | orchestrator | Sunday 05 April 2026 00:59:57 +0000 (0:00:00.127) 0:00:10.934 ********** 2026-04-05 01:01:32.673809 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673816 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 01:01:32.673823 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673830 | orchestrator | 2026-04-05 01:01:32.673836 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673843 | orchestrator | Sunday 05 April 2026 00:59:58 +0000 (0:00:00.336) 0:00:11.270 ********** 2026-04-05 01:01:32.673850 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673857 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673864 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673870 | orchestrator | 2026-04-05 01:01:32.673877 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:01:32.673884 | orchestrator | Sunday 05 April 2026 00:59:58 +0000 (0:00:00.329) 0:00:11.600 ********** 2026-04-05 01:01:32.673891 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673898 | orchestrator | 2026-04-05 01:01:32.673908 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.673925 | orchestrator | Sunday 05 April 2026 00:59:58 +0000 (0:00:00.150) 0:00:11.751 ********** 2026-04-05 01:01:32.673932 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.673939 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.673949 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.673955 | orchestrator | 2026-04-05 01:01:32.673962 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-05 01:01:32.673969 | orchestrator | Sunday 05 April 2026 00:59:58 +0000 (0:00:00.307) 0:00:12.058 ********** 2026-04-05 01:01:32.673976 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:01:32.673983 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:01:32.673990 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:01:32.673997 | orchestrator | 2026-04-05 01:01:32.674003 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2026-04-05 01:01:32.674010 | orchestrator | Sunday 05 April 2026 00:59:59 +0000 (0:00:00.571) 0:00:12.630 ********** 2026-04-05 01:01:32.674062 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.674069 | orchestrator | 2026-04-05 01:01:32.674076 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-05 01:01:32.674083 | orchestrator | Sunday 05 April 2026 00:59:59 +0000 (0:00:00.149) 0:00:12.780 ********** 2026-04-05 01:01:32.674090 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.674097 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.674104 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.674110 | orchestrator | 2026-04-05 01:01:32.674117 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-05 01:01:32.674124 | orchestrator | Sunday 05 April 2026 00:59:59 +0000 (0:00:00.310) 0:00:13.091 ********** 2026-04-05 01:01:32.674131 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:01:32.674137 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:01:32.674144 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:01:32.674151 | orchestrator | 2026-04-05 01:01:32.674158 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-05 01:01:32.674164 | orchestrator | Sunday 05 April 2026 01:00:01 +0000 (0:00:01.933) 0:00:15.024 ********** 2026-04-05 01:01:32.674171 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-05 01:01:32.674178 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-05 01:01:32.674185 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-05 01:01:32.674192 | orchestrator | 2026-04-05 
01:01:32.674199 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-05 01:01:32.674205 | orchestrator | Sunday 05 April 2026 01:00:04 +0000 (0:00:02.790) 0:00:17.814 ********** 2026-04-05 01:01:32.674212 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 01:01:32.674223 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 01:01:32.674230 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 01:01:32.674236 | orchestrator | 2026-04-05 01:01:32.674242 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-05 01:01:32.674249 | orchestrator | Sunday 05 April 2026 01:00:06 +0000 (0:00:02.109) 0:00:19.923 ********** 2026-04-05 01:01:32.674256 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 01:01:32.674263 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 01:01:32.674269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 01:01:32.674276 | orchestrator | 2026-04-05 01:01:32.674283 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-05 01:01:32.674290 | orchestrator | Sunday 05 April 2026 01:00:08 +0000 (0:00:01.762) 0:00:21.686 ********** 2026-04-05 01:01:32.674296 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.674303 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.674310 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.674317 | orchestrator | 2026-04-05 01:01:32.674324 | orchestrator | TASK [horizon : Copying over custom themes] 
************************************ 2026-04-05 01:01:32.674330 | orchestrator | Sunday 05 April 2026 01:00:08 +0000 (0:00:00.301) 0:00:21.987 ********** 2026-04-05 01:01:32.674337 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.674344 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.674351 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.674358 | orchestrator | 2026-04-05 01:01:32.674364 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 01:01:32.674371 | orchestrator | Sunday 05 April 2026 01:00:09 +0000 (0:00:00.259) 0:00:22.247 ********** 2026-04-05 01:01:32.674378 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:01:32.674385 | orchestrator | 2026-04-05 01:01:32.674392 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-05 01:01:32.674399 | orchestrator | Sunday 05 April 2026 01:00:09 +0000 (0:00:00.659) 0:00:22.906 ********** 2026-04-05 01:01:32.674416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.674431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.674444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 2026-04-05 01:01:32 | INFO  | Task 8f7fa720-c7cb-4f2e-ad10-7a0d57e3e6f1 is in state SUCCESS 2026-04-05 01:01:32.674452 | orchestrator | 2026-04-05 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:01:32.674462 | orchestrator | 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.674474 | orchestrator | 2026-04-05 01:01:32.674481 | 
orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-05 01:01:32.674488 | orchestrator | Sunday 05 April 2026 01:00:11 +0000 (0:00:01.399) 0:00:24.305 ********** 2026-04-05 01:01:32.674504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:01:32.674511 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.674519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:01:32.674530 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.674545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:01:32.674553 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.674560 | orchestrator | 2026-04-05 01:01:32.674567 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-05 01:01:32.674578 | orchestrator | Sunday 05 April 2026 01:00:11 +0000 (0:00:00.706) 0:00:25.012 ********** 2026-04-05 01:01:32.674585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:01:32.674593 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:01:32.674607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:01:32.674620 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:01:32.674627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 01:01:32.674634 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:01:32.674641 | orchestrator | 2026-04-05 01:01:32.674648 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-05 01:01:32.674655 | orchestrator | Sunday 05 April 2026 01:00:12 +0000 (0:00:00.973) 0:00:25.985 ********** 2026-04-05 01:01:32.674670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.674686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 01:01:32.674702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 01:01:32.674714 | orchestrator |
2026-04-05 01:01:32.674721 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 01:01:32.674727 | orchestrator | Sunday 05 April 2026 01:00:14 +0000 (0:00:01.482) 0:00:27.468 **********
2026-04-05 01:01:32.674734 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:01:32.674741 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:01:32.674748 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:01:32.674755 | orchestrator |
2026-04-05 01:01:32.674762 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 01:01:32.674769 | orchestrator | Sunday 05 April 2026 01:00:14 +0000 (0:00:00.453) 0:00:27.921 **********
2026-04-05 01:01:32.674776 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:01:32.674783 | orchestrator |
2026-04-05 01:01:32.674789 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-05 01:01:32.674796 | orchestrator | Sunday 05 April 2026 01:00:15 +0000 (0:00:00.749) 0:00:28.671 **********
2026-04-05 01:01:32.674803 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:01:32.674810 | orchestrator |
2026-04-05 01:01:32.674817 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-04-05 01:01:32.674823 | orchestrator | Sunday 05 April 2026 01:00:17 +0000 (0:00:02.535) 0:00:31.207 **********
2026-04-05 01:01:32.674830 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:01:32.674837 | orchestrator |
2026-04-05 01:01:32.674844 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-05 01:01:32.674851 | orchestrator | Sunday 05 April 2026 01:00:20 +0000 (0:00:02.480) 0:00:33.688 **********
2026-04-05 01:01:32.674857 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:01:32.674864 | orchestrator |
2026-04-05 01:01:32.674871 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-05 01:01:32.674878 | orchestrator | Sunday 05 April 2026 01:00:37 +0000 (0:00:17.297) 0:00:50.985 **********
2026-04-05 01:01:32.674884 | orchestrator |
2026-04-05 01:01:32.674891 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-05 01:01:32.674898 | orchestrator | Sunday 05 April 2026 01:00:37 +0000 (0:00:00.084) 0:00:51.070 **********
2026-04-05 01:01:32.674905 | orchestrator |
2026-04-05 01:01:32.674911 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-05 01:01:32.674954 | orchestrator | Sunday 05 April 2026 01:00:37 +0000 (0:00:00.066) 0:00:51.136 **********
2026-04-05 01:01:32.674961 | orchestrator |
2026-04-05 01:01:32.674968 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-05 01:01:32.674979 | orchestrator | Sunday 05 April 2026 01:00:37 +0000 (0:00:00.070) 0:00:51.206 **********
2026-04-05 01:01:32.674986 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:01:32.674993 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:01:32.675000 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:01:32.675006 | orchestrator |
2026-04-05 01:01:32.675013 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:01:32.675020 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-05 01:01:32.675030 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-05 01:01:32.675037 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-05 01:01:32.675044 | orchestrator |
2026-04-05 01:01:32.675051 | orchestrator |
2026-04-05 01:01:32.675061 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:01:32.675067 | orchestrator | Sunday 05 April 2026 01:01:29 +0000 (0:00:51.776) 0:01:42.983 **********
2026-04-05 01:01:32.675074 | orchestrator | ===============================================================================
2026-04-05 01:01:32.675081 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.78s
2026-04-05 01:01:32.675088 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.30s
2026-04-05 01:01:32.675094 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.79s
2026-04-05 01:01:32.675101 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.54s
2026-04-05 01:01:32.675108 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.48s
2026-04-05 01:01:32.675115 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.11s
2026-04-05 01:01:32.675122 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.93s
2026-04-05 01:01:32.675129 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.76s
2026-04-05 01:01:32.675135 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.60s
2026-04-05 01:01:32.675142 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.49s
2026-04-05 01:01:32.675149 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.40s
2026-04-05 01:01:32.675156 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.97s
2026-04-05 01:01:32.675163 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.88s
2026-04-05 01:01:32.675170 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2026-04-05 01:01:32.675176 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s
2026-04-05 01:01:32.675183 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s
2026-04-05 01:01:32.675190 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s
2026-04-05 01:01:32.675197 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s
2026-04-05 01:01:32.675204 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s
2026-04-05 01:01:32.675211 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s
2026-04-05 01:01:35.718387 | orchestrator | 2026-04-05 01:01:35 | INFO  | Task
daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:35.720040 | orchestrator | 2026-04-05 01:01:35 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED
2026-04-05 01:01:35.720075 | orchestrator | 2026-04-05 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:01:38.767045 | orchestrator | 2026-04-05 01:01:38 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:38.769768 | orchestrator | 2026-04-05 01:01:38 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED
2026-04-05 01:01:38.769796 | orchestrator | 2026-04-05 01:01:38 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:01:41.826615 | orchestrator | 2026-04-05 01:01:41 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:41.827995 | orchestrator | 2026-04-05 01:01:41 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED
2026-04-05 01:01:41.828025 | orchestrator | 2026-04-05 01:01:41 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:01:44.880691 | orchestrator | 2026-04-05 01:01:44 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:44.883040 | orchestrator | 2026-04-05 01:01:44 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED
2026-04-05 01:01:44.883092 | orchestrator | 2026-04-05 01:01:44 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:01:47.924370 | orchestrator | 2026-04-05 01:01:47 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:47.926383 | orchestrator | 2026-04-05 01:01:47 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED
2026-04-05 01:01:47.926564 | orchestrator | 2026-04-05 01:01:47 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:01:50.984394 | orchestrator | 2026-04-05 01:01:50 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:50.985099 | orchestrator | 2026-04-05 01:01:50 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED
2026-04-05 01:01:50.985134 | orchestrator | 2026-04-05 01:01:50 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:01:54.035235 | orchestrator | 2026-04-05 01:01:54 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:54.036127 | orchestrator | 2026-04-05 01:01:54 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state STARTED
2026-04-05 01:01:54.037946 | orchestrator | 2026-04-05 01:01:54 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:01:57.088179 | orchestrator | 2026-04-05 01:01:57 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:01:57.089254 | orchestrator | 2026-04-05 01:01:57 | INFO  | Task 9cd21d21-1963-4117-bb55-f226dfb9e1a9 is in state SUCCESS
2026-04-05 01:01:57.089300 | orchestrator | 2026-04-05 01:01:57 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:00.146874 | orchestrator | 2026-04-05 01:02:00 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:00.149374 | orchestrator | 2026-04-05 01:02:00 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:00.151038 | orchestrator | 2026-04-05 01:02:00 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:00.152578 | orchestrator | 2026-04-05 01:02:00 | INFO  | Task 6f60dc2d-2965-45bf-b1d8-de1c02fd2a41 is in state STARTED
2026-04-05 01:02:00.152620 | orchestrator | 2026-04-05 01:02:00 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:03.200248 | orchestrator | 2026-04-05 01:02:03 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:03.200344 | orchestrator | 2026-04-05 01:02:03 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:03.200358 | orchestrator | 2026-04-05 01:02:03 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:03.201701 | orchestrator | 2026-04-05 01:02:03 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:03.201741 | orchestrator | 2026-04-05 01:02:03 | INFO  | Task 6f60dc2d-2965-45bf-b1d8-de1c02fd2a41 is in state SUCCESS
2026-04-05 01:02:03.202154 | orchestrator | 2026-04-05 01:02:03 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:03.202192 | orchestrator | 2026-04-05 01:02:03 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:06.558386 | orchestrator | 2026-04-05 01:02:06 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:06.558498 | orchestrator | 2026-04-05 01:02:06 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:06.559729 | orchestrator | 2026-04-05 01:02:06 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:06.560461 | orchestrator | 2026-04-05 01:02:06 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:06.561485 | orchestrator | 2026-04-05 01:02:06 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:06.561519 | orchestrator | 2026-04-05 01:02:06 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:09.605126 | orchestrator | 2026-04-05 01:02:09 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:09.605486 | orchestrator | 2026-04-05 01:02:09 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:09.606326 | orchestrator | 2026-04-05 01:02:09 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:09.607211 | orchestrator | 2026-04-05 01:02:09 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:09.607946 | orchestrator | 2026-04-05 01:02:09 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:09.608779 | orchestrator | 2026-04-05 01:02:09 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:12.663437 | orchestrator | 2026-04-05 01:02:12 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:12.667351 | orchestrator | 2026-04-05 01:02:12 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:12.672292 | orchestrator | 2026-04-05 01:02:12 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:12.675422 | orchestrator | 2026-04-05 01:02:12 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:12.677103 | orchestrator | 2026-04-05 01:02:12 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:12.677146 | orchestrator | 2026-04-05 01:02:12 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:15.721839 | orchestrator | 2026-04-05 01:02:15 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:15.724167 | orchestrator | 2026-04-05 01:02:15 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:15.726173 | orchestrator | 2026-04-05 01:02:15 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:15.727864 | orchestrator | 2026-04-05 01:02:15 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:15.730919 | orchestrator | 2026-04-05 01:02:15 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:15.731432 | orchestrator | 2026-04-05 01:02:15 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:18.766760 | orchestrator | 2026-04-05 01:02:18 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:18.768983 | orchestrator | 2026-04-05 01:02:18 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:18.772038 | orchestrator | 2026-04-05 01:02:18 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:18.773289 | orchestrator | 2026-04-05 01:02:18 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:18.775224 | orchestrator | 2026-04-05 01:02:18 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:18.775272 | orchestrator | 2026-04-05 01:02:18 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:21.815858 | orchestrator | 2026-04-05 01:02:21 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:21.816775 | orchestrator | 2026-04-05 01:02:21 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:21.819509 | orchestrator | 2026-04-05 01:02:21 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:21.822307 | orchestrator | 2026-04-05 01:02:21 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:21.825079 | orchestrator | 2026-04-05 01:02:21 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:21.825970 | orchestrator | 2026-04-05 01:02:21 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:24.862006 | orchestrator | 2026-04-05 01:02:24 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:24.865016 | orchestrator | 2026-04-05 01:02:24 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:24.867856 | orchestrator | 2026-04-05 01:02:24 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:24.871786 | orchestrator | 2026-04-05 01:02:24 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:24.874262 | orchestrator | 2026-04-05 01:02:24 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:24.874312 | orchestrator | 2026-04-05 01:02:24 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:27.914226 | orchestrator | 2026-04-05 01:02:27 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:27.915777 | orchestrator | 2026-04-05 01:02:27 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:27.916669 | orchestrator | 2026-04-05 01:02:27 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:27.917559 | orchestrator | 2026-04-05 01:02:27 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:27.918319 | orchestrator | 2026-04-05 01:02:27 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:27.918453 | orchestrator | 2026-04-05 01:02:27 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:30.946244 | orchestrator | 2026-04-05 01:02:30 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:30.946530 | orchestrator | 2026-04-05 01:02:30 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:30.947429 | orchestrator | 2026-04-05 01:02:30 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:30.948413 | orchestrator | 2026-04-05 01:02:30 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:30.949127 | orchestrator | 2026-04-05 01:02:30 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:30.949175 | orchestrator | 2026-04-05 01:02:30 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:33.976236 | orchestrator | 2026-04-05 01:02:33 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:33.979037 | orchestrator | 2026-04-05 01:02:33 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:33.979089 | orchestrator | 2026-04-05 01:02:33 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:33.979102 | orchestrator | 2026-04-05 01:02:33 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:33.979113 | orchestrator | 2026-04-05 01:02:33 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:33.979124 | orchestrator | 2026-04-05 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:37.277547 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:37.277647 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state STARTED
2026-04-05 01:02:37.277661 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED
2026-04-05 01:02:37.277674 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED
2026-04-05 01:02:37.277684 | orchestrator | 2026-04-05 01:02:37 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED
2026-04-05 01:02:37.277695 | orchestrator | 2026-04-05 01:02:37 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:02:40.042292 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED
2026-04-05 01:02:40.043686 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task daeb651d-59f5-4d09-a822-e33f38a3d3e8 is in state SUCCESS
2026-04-05 01:02:40.045465 | orchestrator |
2026-04-05 01:02:40.045498 | orchestrator |
2026-04-05 01:02:40.045508 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-05 01:02:40.045516 | orchestrator |
2026-04-05 01:02:40.045524 | orchestrator | TASK [osism.services.cephclient : Include container
tasks] ********************* 2026-04-05 01:02:40.045531 | orchestrator | Sunday 05 April 2026 01:00:56 +0000 (0:00:00.355) 0:00:00.355 ********** 2026-04-05 01:02:40.045539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-05 01:02:40.045547 | orchestrator | 2026-04-05 01:02:40.045554 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-05 01:02:40.045581 | orchestrator | Sunday 05 April 2026 01:00:56 +0000 (0:00:00.238) 0:00:00.593 ********** 2026-04-05 01:02:40.045590 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-05 01:02:40.045597 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-05 01:02:40.045663 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-05 01:02:40.045672 | orchestrator | 2026-04-05 01:02:40.045680 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-05 01:02:40.045687 | orchestrator | Sunday 05 April 2026 01:00:58 +0000 (0:00:01.642) 0:00:02.235 ********** 2026-04-05 01:02:40.045695 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-05 01:02:40.045702 | orchestrator | 2026-04-05 01:02:40.045709 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-05 01:02:40.045717 | orchestrator | Sunday 05 April 2026 01:00:59 +0000 (0:00:01.229) 0:00:03.465 ********** 2026-04-05 01:02:40.045724 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:40.045750 | orchestrator | 2026-04-05 01:02:40.045758 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-05 01:02:40.045765 | orchestrator | Sunday 05 April 2026 01:01:00 +0000 (0:00:00.915) 0:00:04.380 ********** 
2026-04-05 01:02:40.045773 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:40.045780 | orchestrator | 2026-04-05 01:02:40.045787 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-05 01:02:40.045794 | orchestrator | Sunday 05 April 2026 01:01:01 +0000 (0:00:00.973) 0:00:05.353 ********** 2026-04-05 01:02:40.045801 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-05 01:02:40.045808 | orchestrator | ok: [testbed-manager] 2026-04-05 01:02:40.045816 | orchestrator | 2026-04-05 01:02:40.045823 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-05 01:02:40.045830 | orchestrator | Sunday 05 April 2026 01:01:45 +0000 (0:00:44.426) 0:00:49.780 ********** 2026-04-05 01:02:40.045838 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-05 01:02:40.045845 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-05 01:02:40.045852 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-05 01:02:40.045885 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-05 01:02:40.045893 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-05 01:02:40.045901 | orchestrator | 2026-04-05 01:02:40.045914 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-05 01:02:40.045926 | orchestrator | Sunday 05 April 2026 01:01:50 +0000 (0:00:04.423) 0:00:54.203 ********** 2026-04-05 01:02:40.046106 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-05 01:02:40.046116 | orchestrator | 2026-04-05 01:02:40.046125 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-05 01:02:40.046135 | orchestrator | Sunday 05 April 2026 01:01:50 +0000 (0:00:00.617) 0:00:54.821 ********** 2026-04-05 01:02:40.046155 | orchestrator | skipping: 
[testbed-manager] 2026-04-05 01:02:40.046169 | orchestrator | 2026-04-05 01:02:40.046177 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-05 01:02:40.046186 | orchestrator | Sunday 05 April 2026 01:01:50 +0000 (0:00:00.130) 0:00:54.951 ********** 2026-04-05 01:02:40.046196 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:02:40.046204 | orchestrator | 2026-04-05 01:02:40.046215 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-05 01:02:40.046224 | orchestrator | Sunday 05 April 2026 01:01:51 +0000 (0:00:00.336) 0:00:55.288 ********** 2026-04-05 01:02:40.046233 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:40.046242 | orchestrator | 2026-04-05 01:02:40.046251 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-05 01:02:40.046260 | orchestrator | Sunday 05 April 2026 01:01:52 +0000 (0:00:01.525) 0:00:56.814 ********** 2026-04-05 01:02:40.046268 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:40.046277 | orchestrator | 2026-04-05 01:02:40.046286 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-05 01:02:40.046294 | orchestrator | Sunday 05 April 2026 01:01:53 +0000 (0:00:00.763) 0:00:57.578 ********** 2026-04-05 01:02:40.046303 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:40.046312 | orchestrator | 2026-04-05 01:02:40.046320 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-05 01:02:40.046329 | orchestrator | Sunday 05 April 2026 01:01:54 +0000 (0:00:00.607) 0:00:58.185 ********** 2026-04-05 01:02:40.046337 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-05 01:02:40.046346 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-05 01:02:40.046355 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-05 
01:02:40.046364 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-05 01:02:40.046373 | orchestrator | 2026-04-05 01:02:40.046382 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:02:40.046392 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:02:40.046409 | orchestrator | 2026-04-05 01:02:40.046418 | orchestrator | 2026-04-05 01:02:40.046438 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:02:40.046448 | orchestrator | Sunday 05 April 2026 01:01:55 +0000 (0:00:01.561) 0:00:59.747 ********** 2026-04-05 01:02:40.046457 | orchestrator | =============================================================================== 2026-04-05 01:02:40.046466 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 44.43s 2026-04-05 01:02:40.046473 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.42s 2026-04-05 01:02:40.046480 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.64s 2026-04-05 01:02:40.046487 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.56s 2026-04-05 01:02:40.046494 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.53s 2026-04-05 01:02:40.046502 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.23s 2026-04-05 01:02:40.046509 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2026-04-05 01:02:40.046516 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s 2026-04-05 01:02:40.046523 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s 2026-04-05 01:02:40.046530 | orchestrator | 
osism.services.cephclient : Remove old wrapper scripts ------------------ 0.62s 2026-04-05 01:02:40.046537 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2026-04-05 01:02:40.046544 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.34s 2026-04-05 01:02:40.046551 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-04-05 01:02:40.046558 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-04-05 01:02:40.046565 | orchestrator | 2026-04-05 01:02:40.046572 | orchestrator | 2026-04-05 01:02:40.046579 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:02:40.046587 | orchestrator | 2026-04-05 01:02:40.046594 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:02:40.046601 | orchestrator | Sunday 05 April 2026 01:01:59 +0000 (0:00:00.195) 0:00:00.195 ********** 2026-04-05 01:02:40.046608 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.046615 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:40.046622 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:40.046629 | orchestrator | 2026-04-05 01:02:40.046637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:02:40.046644 | orchestrator | Sunday 05 April 2026 01:01:59 +0000 (0:00:00.380) 0:00:00.576 ********** 2026-04-05 01:02:40.046651 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-05 01:02:40.046658 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-05 01:02:40.046666 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-05 01:02:40.046673 | orchestrator | 2026-04-05 01:02:40.046680 | orchestrator | PLAY [Wait for the Keystone service] 
******************************************* 2026-04-05 01:02:40.046687 | orchestrator | 2026-04-05 01:02:40.046694 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-04-05 01:02:40.046702 | orchestrator | Sunday 05 April 2026 01:02:00 +0000 (0:00:00.549) 0:00:01.125 ********** 2026-04-05 01:02:40.046709 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:40.046716 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:40.046723 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.046730 | orchestrator | 2026-04-05 01:02:40.046737 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:02:40.046745 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:40.046760 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:40.046768 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:40.046775 | orchestrator | 2026-04-05 01:02:40.046782 | orchestrator | 2026-04-05 01:02:40.046789 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:02:40.046797 | orchestrator | Sunday 05 April 2026 01:02:01 +0000 (0:00:01.138) 0:00:02.263 ********** 2026-04-05 01:02:40.046804 | orchestrator | =============================================================================== 2026-04-05 01:02:40.046811 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.14s 2026-04-05 01:02:40.046818 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2026-04-05 01:02:40.046825 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-04-05 01:02:40.046832 | orchestrator | 2026-04-05 01:02:40.046839 | orchestrator | 
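The "Waiting for Keystone public port to be UP" task above is a plain TCP reachability poll (the pattern Ansible's wait_for module implements). A minimal sketch of that pattern, assuming a connect success means the port is up; wait_for_port is an illustrative name, not something from the playbooks:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0,
                  interval: float = 1.0) -> bool:
    """Poll until a TCP connect to host:port succeeds or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A completed connect means the listener accepted the handshake.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Port not (yet) reachable; wait until the next check.
            time.sleep(interval)
    return False
```

In the log, all three testbed nodes report `ok` here because HAProxy is already answering on the Keystone listen port 5000 before the role runs.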
2026-04-05 01:02:40.046846 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:02:40.046853 | orchestrator | 2026-04-05 01:02:40.046903 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:02:40.046910 | orchestrator | Sunday 05 April 2026 00:59:47 +0000 (0:00:00.324) 0:00:00.324 ********** 2026-04-05 01:02:40.046917 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.046925 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:40.046932 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:40.046939 | orchestrator | 2026-04-05 01:02:40.046946 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:02:40.046953 | orchestrator | Sunday 05 April 2026 00:59:47 +0000 (0:00:00.319) 0:00:00.644 ********** 2026-04-05 01:02:40.046961 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-05 01:02:40.046968 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-05 01:02:40.046975 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-05 01:02:40.046982 | orchestrator | 2026-04-05 01:02:40.046990 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-05 01:02:40.046997 | orchestrator | 2026-04-05 01:02:40.047017 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:02:40.047024 | orchestrator | Sunday 05 April 2026 00:59:47 +0000 (0:00:00.300) 0:00:00.944 ********** 2026-04-05 01:02:40.047031 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:02:40.047039 | orchestrator | 2026-04-05 01:02:40.047046 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-05 01:02:40.047053 | orchestrator | 
Sunday 05 April 2026 00:59:48 +0000 (0:00:00.668) 0:00:01.613 ********** 2026-04-05 01:02:40.047065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.047076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.047095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.047104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047172 | orchestrator | 2026-04-05 01:02:40.047179 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-05 01:02:40.047187 | orchestrator | Sunday 05 April 2026 00:59:50 +0000 (0:00:02.149) 0:00:03.762 ********** 2026-04-05 01:02:40.047194 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.047202 | orchestrator | 2026-04-05 01:02:40.047209 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-05 01:02:40.047216 | orchestrator | Sunday 05 April 2026 00:59:50 +0000 (0:00:00.119) 0:00:03.882 ********** 2026-04-05 01:02:40.047223 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.047230 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:40.047237 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:40.047244 | orchestrator | 
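The container definitions printed above each carry a Kolla-style healthcheck dict ('interval', 'retries', 'start_period', 'test', 'timeout'). A rough sketch of the retry semantics such a dict implies, assuming a test exit code of 0 means healthy (run_healthcheck is a hypothetical helper, not part of the role):

```python
import time

def run_healthcheck(check, retries=3, interval=0.0):
    """Run `check` (returns a shell-style exit code) up to `retries` times,
    sleeping `interval` seconds between attempts, like a container healthcheck."""
    for attempt in range(int(retries)):
        if check() == 0:  # exit code 0 == healthy, as with CMD-SHELL tests
            return "healthy"
        if attempt < int(retries) - 1:
            time.sleep(float(interval))
    return "unhealthy"
```

For the `keystone` container in the log, the configured test is a curl against the node's internal API address on port 5000, retried 3 times at 30-second intervals before the container is marked unhealthy.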
2026-04-05 01:02:40.047251 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-05 01:02:40.047259 | orchestrator | Sunday 05 April 2026 00:59:50 +0000 (0:00:00.278) 0:00:04.160 ********** 2026-04-05 01:02:40.047266 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:02:40.047273 | orchestrator | 2026-04-05 01:02:40.047280 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:02:40.047287 | orchestrator | Sunday 05 April 2026 00:59:51 +0000 (0:00:00.938) 0:00:05.098 ********** 2026-04-05 01:02:40.047294 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:02:40.047302 | orchestrator | 2026-04-05 01:02:40.047309 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-05 01:02:40.047320 | orchestrator | Sunday 05 April 2026 00:59:52 +0000 (0:00:00.713) 0:00:05.812 ********** 2026-04-05 01:02:40.047329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.047342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.047353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.047362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.047384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047457 | orchestrator |
2026-04-05 01:02:40.047471 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-04-05 01:02:40.047485 | orchestrator | Sunday 05 April 2026 00:59:55 +0000 (0:00:03.278) 0:00:09.090 **********
2026-04-05 01:02:40.047499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047545 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:02:40.047553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047581 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:02:40.047594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047622 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:02:40.047629 | orchestrator |
2026-04-05 01:02:40.047642 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-04-05 01:02:40.047677 | orchestrator | Sunday 05 April 2026 00:59:56 +0000 (0:00:00.623) 0:00:09.713 **********
2026-04-05 01:02:40.047696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047736 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:02:40.047759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047784 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:02:40.047794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.047832 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:02:40.047845 | orchestrator |
2026-04-05 01:02:40.047873 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-04-05 01:02:40.047893 | orchestrator | Sunday 05 April 2026 00:59:57 +0000 (0:00:00.996) 0:00:10.709 **********
2026-04-05 01:02:40.047908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.047969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.047993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048019 | orchestrator |
2026-04-05 01:02:40.048026 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-04-05 01:02:40.048034 | orchestrator | Sunday 05 April 2026 01:00:00 +0000 (0:00:03.348) 0:00:14.058 **********
2026-04-05 01:02:40.048046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.048061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.048069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.048077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.048089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.048101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.048114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048136 | orchestrator |
2026-04-05 01:02:40.048144 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-04-05 01:02:40.048151 | orchestrator | Sunday 05 April 2026 01:00:06 +0000 (0:00:05.815) 0:00:19.874 **********
2026-04-05 01:02:40.048158 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:02:40.048166 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:02:40.048173 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:02:40.048180 | orchestrator |
2026-04-05 01:02:40.048187 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-05 01:02:40.048195 | orchestrator | Sunday 05 April 2026 01:00:07 +0000 (0:00:01.383) 0:00:21.257 **********
2026-04-05 01:02:40.048202 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:02:40.048209 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:02:40.048216 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:02:40.048223 | orchestrator |
2026-04-05 01:02:40.048230 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-05 01:02:40.048238 | orchestrator | Sunday 05 April 2026 01:00:08 +0000 (0:00:00.279) 0:00:22.075 **********
2026-04-05 01:02:40.048245 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:02:40.048252 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:02:40.048263 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:02:40.048271 | orchestrator |
2026-04-05 01:02:40.048281 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-05 01:02:40.048288 | orchestrator | Sunday 05 April 2026 01:00:09 +0000 (0:00:00.275) 0:00:22.354 **********
2026-04-05 01:02:40.048295 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:02:40.048302 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:02:40.048310 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:02:40.048317 | orchestrator |
2026-04-05 01:02:40.048324 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-05 01:02:40.048331 | orchestrator | Sunday 05 April 2026 01:00:09 +0000 (0:00:00.275) 0:00:22.630 **********
2026-04-05 01:02:40.048339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.048352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.048360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048367 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:02:40.048375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.048390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 01:02:40.048398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 01:02:40.048406 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:02:40.048430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 01:02:40.048439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 01:02:40.048447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 01:02:40.048455 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:40.048462 | orchestrator | 2026-04-05 01:02:40.048469 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:02:40.048477 | orchestrator | Sunday 05 April 2026 01:00:09 +0000 (0:00:00.529) 0:00:23.159 ********** 2026-04-05 01:02:40.048488 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.048496 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:40.048503 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:40.048516 | orchestrator | 2026-04-05 01:02:40.048528 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] 
****************************** 2026-04-05 01:02:40.048540 | orchestrator | Sunday 05 April 2026 01:00:10 +0000 (0:00:00.377) 0:00:23.537 ********** 2026-04-05 01:02:40.048558 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 01:02:40.048572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 01:02:40.048584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 01:02:40.048596 | orchestrator | 2026-04-05 01:02:40.048609 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-05 01:02:40.048620 | orchestrator | Sunday 05 April 2026 01:00:11 +0000 (0:00:01.501) 0:00:25.038 ********** 2026-04-05 01:02:40.048631 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:02:40.048644 | orchestrator | 2026-04-05 01:02:40.048663 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-05 01:02:40.048675 | orchestrator | Sunday 05 April 2026 01:00:12 +0000 (0:00:00.904) 0:00:25.942 ********** 2026-04-05 01:02:40.048688 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.048698 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:40.048706 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:40.048713 | orchestrator | 2026-04-05 01:02:40.048721 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-05 01:02:40.048728 | orchestrator | Sunday 05 April 2026 01:00:13 +0000 (0:00:00.763) 0:00:26.706 ********** 2026-04-05 01:02:40.048735 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:02:40.048742 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 01:02:40.048749 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 01:02:40.048756 | orchestrator | 2026-04-05 
01:02:40.048764 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-05 01:02:40.048771 | orchestrator | Sunday 05 April 2026 01:00:14 +0000 (0:00:01.363) 0:00:28.069 ********** 2026-04-05 01:02:40.048778 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.048786 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:40.048793 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:40.048800 | orchestrator | 2026-04-05 01:02:40.048807 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-05 01:02:40.048814 | orchestrator | Sunday 05 April 2026 01:00:15 +0000 (0:00:00.554) 0:00:28.624 ********** 2026-04-05 01:02:40.048822 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 01:02:40.048829 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 01:02:40.048836 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 01:02:40.048843 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 01:02:40.048851 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 01:02:40.048877 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 01:02:40.048885 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 01:02:40.048893 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 01:02:40.048900 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 01:02:40.048907 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 01:02:40.048922 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 01:02:40.048929 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 01:02:40.048937 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 01:02:40.048944 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 01:02:40.048951 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 01:02:40.048958 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:02:40.048966 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:02:40.048973 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 01:02:40.048980 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:02:40.048988 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:02:40.048995 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 01:02:40.049002 | orchestrator | 2026-04-05 01:02:40.049009 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-05 01:02:40.049016 | orchestrator | Sunday 05 April 2026 01:00:24 +0000 (0:00:09.307) 0:00:37.931 ********** 2026-04-05 01:02:40.049023 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:02:40.049030 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:02:40.049038 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 01:02:40.049045 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:02:40.049052 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:02:40.049059 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 01:02:40.049066 | orchestrator | 2026-04-05 01:02:40.049074 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-05 01:02:40.049081 | orchestrator | Sunday 05 April 2026 01:00:27 +0000 (0:00:02.635) 0:00:40.567 ********** 2026-04-05 01:02:40.049092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.049105 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.049119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 01:02:40.049127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.049138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.049146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 01:02:40.049154 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:02:40.049170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:02:40.049178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 01:02:40.049186 | orchestrator | 2026-04-05 01:02:40.049193 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2026-04-05 01:02:40.049201 | orchestrator | Sunday 05 April 2026 01:00:29 +0000 (0:00:02.434) 0:00:43.001 ********** 2026-04-05 01:02:40.049208 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.049215 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:40.049222 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:40.049230 | orchestrator | 2026-04-05 01:02:40.049237 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-05 01:02:40.049244 | orchestrator | Sunday 05 April 2026 01:00:30 +0000 (0:00:00.482) 0:00:43.484 ********** 2026-04-05 01:02:40.049251 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049258 | orchestrator | 2026-04-05 01:02:40.049265 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-05 01:02:40.049273 | orchestrator | Sunday 05 April 2026 01:00:32 +0000 (0:00:02.421) 0:00:45.906 ********** 2026-04-05 01:02:40.049291 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049298 | orchestrator | 2026-04-05 01:02:40.049306 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-05 01:02:40.049319 | orchestrator | Sunday 05 April 2026 01:00:34 +0000 (0:00:02.321) 0:00:48.227 ********** 2026-04-05 01:02:40.049327 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:40.049334 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.049341 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:40.049348 | orchestrator | 2026-04-05 01:02:40.049356 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-05 01:02:40.049363 | orchestrator | Sunday 05 April 2026 01:00:35 +0000 (0:00:00.861) 0:00:49.089 ********** 2026-04-05 01:02:40.049370 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.049377 | orchestrator | ok: 
[testbed-node-1] 2026-04-05 01:02:40.049384 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:40.049392 | orchestrator | 2026-04-05 01:02:40.049399 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-05 01:02:40.049406 | orchestrator | Sunday 05 April 2026 01:00:36 +0000 (0:00:00.325) 0:00:49.414 ********** 2026-04-05 01:02:40.049414 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.049421 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:40.049428 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:40.049436 | orchestrator | 2026-04-05 01:02:40.049443 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-05 01:02:40.049454 | orchestrator | Sunday 05 April 2026 01:00:36 +0000 (0:00:00.338) 0:00:49.753 ********** 2026-04-05 01:02:40.049470 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049478 | orchestrator | 2026-04-05 01:02:40.049485 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-05 01:02:40.049492 | orchestrator | Sunday 05 April 2026 01:00:52 +0000 (0:00:16.318) 0:01:06.071 ********** 2026-04-05 01:02:40.049500 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049507 | orchestrator | 2026-04-05 01:02:40.049514 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 01:02:40.049521 | orchestrator | Sunday 05 April 2026 01:01:05 +0000 (0:00:12.707) 0:01:18.778 ********** 2026-04-05 01:02:40.049529 | orchestrator | 2026-04-05 01:02:40.049536 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 01:02:40.049543 | orchestrator | Sunday 05 April 2026 01:01:05 +0000 (0:00:00.065) 0:01:18.843 ********** 2026-04-05 01:02:40.049551 | orchestrator | 2026-04-05 01:02:40.049558 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2026-04-05 01:02:40.049565 | orchestrator | Sunday 05 April 2026 01:01:05 +0000 (0:00:00.068) 0:01:18.912 ********** 2026-04-05 01:02:40.049572 | orchestrator | 2026-04-05 01:02:40.049579 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-05 01:02:40.049587 | orchestrator | Sunday 05 April 2026 01:01:05 +0000 (0:00:00.067) 0:01:18.979 ********** 2026-04-05 01:02:40.049594 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049601 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:40.049608 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:40.049616 | orchestrator | 2026-04-05 01:02:40.049623 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-05 01:02:40.049630 | orchestrator | Sunday 05 April 2026 01:01:26 +0000 (0:00:20.903) 0:01:39.882 ********** 2026-04-05 01:02:40.049637 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049645 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:40.049652 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:40.049659 | orchestrator | 2026-04-05 01:02:40.049666 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-05 01:02:40.049673 | orchestrator | Sunday 05 April 2026 01:01:31 +0000 (0:00:04.757) 0:01:44.640 ********** 2026-04-05 01:02:40.049808 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049820 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:40.049827 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:40.049834 | orchestrator | 2026-04-05 01:02:40.049842 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:02:40.049849 | orchestrator | Sunday 05 April 2026 01:01:37 +0000 (0:00:06.229) 0:01:50.870 ********** 2026-04-05 01:02:40.049876 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:02:40.049884 | orchestrator | 2026-04-05 01:02:40.049892 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-05 01:02:40.049899 | orchestrator | Sunday 05 April 2026 01:01:38 +0000 (0:00:00.546) 0:01:51.416 ********** 2026-04-05 01:02:40.049906 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:40.049914 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.049921 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:40.049928 | orchestrator | 2026-04-05 01:02:40.049935 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-05 01:02:40.049943 | orchestrator | Sunday 05 April 2026 01:01:38 +0000 (0:00:00.740) 0:01:52.157 ********** 2026-04-05 01:02:40.049950 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:02:40.049957 | orchestrator | 2026-04-05 01:02:40.049965 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-05 01:02:40.049972 | orchestrator | Sunday 05 April 2026 01:01:40 +0000 (0:00:01.781) 0:01:53.938 ********** 2026-04-05 01:02:40.049979 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-05 01:02:40.049986 | orchestrator | 2026-04-05 01:02:40.049994 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-05 01:02:40.050007 | orchestrator | Sunday 05 April 2026 01:01:53 +0000 (0:00:13.374) 0:02:07.312 ********** 2026-04-05 01:02:40.050014 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-05 01:02:40.050045 | orchestrator | 2026-04-05 01:02:40.050052 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-05 01:02:40.050060 | orchestrator | Sunday 05 April 2026 01:02:21 +0000 (0:00:27.388) 0:02:34.700 ********** 2026-04-05 
01:02:40.050067 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-05 01:02:40.050074 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-05 01:02:40.050081 | orchestrator | 2026-04-05 01:02:40.050088 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-05 01:02:40.050096 | orchestrator | Sunday 05 April 2026 01:02:29 +0000 (0:00:08.164) 0:02:42.865 ********** 2026-04-05 01:02:40.050103 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.050110 | orchestrator | 2026-04-05 01:02:40.050117 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-05 01:02:40.050124 | orchestrator | Sunday 05 April 2026 01:02:30 +0000 (0:00:00.510) 0:02:43.376 ********** 2026-04-05 01:02:40.050131 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.050138 | orchestrator | 2026-04-05 01:02:40.050146 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-05 01:02:40.050153 | orchestrator | Sunday 05 April 2026 01:02:30 +0000 (0:00:00.309) 0:02:43.686 ********** 2026-04-05 01:02:40.050160 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.050167 | orchestrator | 2026-04-05 01:02:40.050174 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-05 01:02:40.050181 | orchestrator | Sunday 05 April 2026 01:02:30 +0000 (0:00:00.172) 0:02:43.858 ********** 2026-04-05 01:02:40.050189 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.050196 | orchestrator | 2026-04-05 01:02:40.050203 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-05 01:02:40.050215 | orchestrator | Sunday 05 April 2026 01:02:31 +0000 (0:00:00.916) 0:02:44.775 ********** 2026-04-05 01:02:40.050222 
| orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:40.050229 | orchestrator | 2026-04-05 01:02:40.050237 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 01:02:40.050244 | orchestrator | Sunday 05 April 2026 01:02:35 +0000 (0:00:03.815) 0:02:48.590 ********** 2026-04-05 01:02:40.050251 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:02:40.050258 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:02:40.050266 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:02:40.050273 | orchestrator | 2026-04-05 01:02:40.050280 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:02:40.050287 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 01:02:40.050295 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 01:02:40.050303 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 01:02:40.050310 | orchestrator | 2026-04-05 01:02:40.050317 | orchestrator | 2026-04-05 01:02:40.050325 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:02:40.050332 | orchestrator | Sunday 05 April 2026 01:02:37 +0000 (0:00:02.364) 0:02:50.954 ********** 2026-04-05 01:02:40.050339 | orchestrator | =============================================================================== 2026-04-05 01:02:40.050346 | orchestrator | service-ks-register : keystone | Creating services --------------------- 27.39s 2026-04-05 01:02:40.050354 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 20.90s 2026-04-05 01:02:40.050361 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.32s 2026-04-05 01:02:40.050371 | orchestrator | keystone : 
Creating admin project, user, role, service, and endpoint --- 13.37s 2026-04-05 01:02:40.050384 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.71s 2026-04-05 01:02:40.050392 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.31s 2026-04-05 01:02:40.050399 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.16s 2026-04-05 01:02:40.050406 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.23s 2026-04-05 01:02:40.050413 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.82s 2026-04-05 01:02:40.050421 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.76s 2026-04-05 01:02:40.050428 | orchestrator | keystone : Creating default user role ----------------------------------- 3.82s 2026-04-05 01:02:40.050435 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.35s 2026-04-05 01:02:40.050442 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.28s 2026-04-05 01:02:40.050452 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.64s 2026-04-05 01:02:40.050460 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.43s 2026-04-05 01:02:40.050469 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s 2026-04-05 01:02:40.050477 | orchestrator | keystone : include_tasks ------------------------------------------------ 2.36s 2026-04-05 01:02:40.050486 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.32s 2026-04-05 01:02:40.050495 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.15s 2026-04-05 01:02:40.050503 | orchestrator | keystone : Run key 
distribution ----------------------------------------- 1.78s 2026-04-05 01:02:40.050511 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:02:40.050520 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED 2026-04-05 01:02:40.050529 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED 2026-04-05 01:02:40.050538 | orchestrator | 2026-04-05 01:02:40 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:02:40.050547 | orchestrator | 2026-04-05 01:02:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:43.092592 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:02:43.093194 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:02:43.093990 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED 2026-04-05 01:02:43.094533 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED 2026-04-05 01:02:43.097133 | orchestrator | 2026-04-05 01:02:43 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:02:43.097198 | orchestrator | 2026-04-05 01:02:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:46.153315 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:02:46.153655 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:02:46.155399 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED 2026-04-05 01:02:46.156957 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task 
79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state STARTED 2026-04-05 01:02:46.157804 | orchestrator | 2026-04-05 01:02:46 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:02:46.157992 | orchestrator | 2026-04-05 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:49.201079 | orchestrator | 2026-04-05 01:02:49 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:02:49.201331 | orchestrator | 2026-04-05 01:02:49 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:02:49.202713 | orchestrator | 2026-04-05 01:02:49 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state STARTED 2026-04-05 01:02:49.204083 | orchestrator | 2026-04-05 01:02:49 | INFO  | Task 79a2f1f3-5b96-47a3-8ba3-1190a02e9ced is in state SUCCESS 2026-04-05 01:02:49.204125 | orchestrator | 2026-04-05 01:02:49 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:02:49.204134 | orchestrator | 2026-04-05 01:02:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:52.238461 | orchestrator | 2026-04-05 01:02:52 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:02:52.238556 | orchestrator | 2026-04-05 01:02:52 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:02:52.238664 | orchestrator | 2026-04-05 01:02:52 | INFO  | Task 8697bf9a-40eb-4fee-b8ee-ab5e6657c709 is in state SUCCESS 2026-04-05 01:02:52.239116 | orchestrator | 2026-04-05 01:02:52.239142 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 01:02:52.239153 | orchestrator | 2.16.14 2026-04-05 01:02:52.239164 | orchestrator | 2026-04-05 01:02:52.239174 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2026-04-05 01:02:52.239184 | orchestrator | 2026-04-05 01:02:52.239193 | orchestrator | TASK [Disable the ceph
dashboard] ********************************************** 2026-04-05 01:02:52.239203 | orchestrator | Sunday 05 April 2026 01:02:00 +0000 (0:00:00.276) 0:00:00.276 ********** 2026-04-05 01:02:52.239213 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239223 | orchestrator | 2026-04-05 01:02:52.239232 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-05 01:02:52.239242 | orchestrator | Sunday 05 April 2026 01:02:02 +0000 (0:00:01.723) 0:00:01.999 ********** 2026-04-05 01:02:52.239252 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239261 | orchestrator | 2026-04-05 01:02:52.239270 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-05 01:02:52.239280 | orchestrator | Sunday 05 April 2026 01:02:03 +0000 (0:00:01.083) 0:00:03.083 ********** 2026-04-05 01:02:52.239289 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239300 | orchestrator | 2026-04-05 01:02:52.239310 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-05 01:02:52.239320 | orchestrator | Sunday 05 April 2026 01:02:04 +0000 (0:00:01.225) 0:00:04.308 ********** 2026-04-05 01:02:52.239329 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239339 | orchestrator | 2026-04-05 01:02:52.239348 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-05 01:02:52.239358 | orchestrator | Sunday 05 April 2026 01:02:06 +0000 (0:00:01.572) 0:00:05.881 ********** 2026-04-05 01:02:52.239367 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239377 | orchestrator | 2026-04-05 01:02:52.239388 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-05 01:02:52.239404 | orchestrator | Sunday 05 April 2026 01:02:07 +0000 (0:00:01.266) 0:00:07.148 ********** 2026-04-05 01:02:52.239420 | 
orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239436 | orchestrator | 2026-04-05 01:02:52.239452 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-05 01:02:52.239468 | orchestrator | Sunday 05 April 2026 01:02:08 +0000 (0:00:01.052) 0:00:08.200 ********** 2026-04-05 01:02:52.239510 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239521 | orchestrator | 2026-04-05 01:02:52.239531 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-05 01:02:52.239541 | orchestrator | Sunday 05 April 2026 01:02:10 +0000 (0:00:02.235) 0:00:10.436 ********** 2026-04-05 01:02:52.239550 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239559 | orchestrator | 2026-04-05 01:02:52.239569 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-05 01:02:52.239579 | orchestrator | Sunday 05 April 2026 01:02:12 +0000 (0:00:01.306) 0:00:11.743 ********** 2026-04-05 01:02:52.239588 | orchestrator | changed: [testbed-manager] 2026-04-05 01:02:52.239598 | orchestrator | 2026-04-05 01:02:52.239607 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-05 01:02:52.239617 | orchestrator | Sunday 05 April 2026 01:02:22 +0000 (0:00:10.883) 0:00:22.627 ********** 2026-04-05 01:02:52.239626 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:02:52.239635 | orchestrator | 2026-04-05 01:02:52.239645 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:02:52.239654 | orchestrator | 2026-04-05 01:02:52.239664 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:02:52.239688 | orchestrator | Sunday 05 April 2026 01:02:23 +0000 (0:00:00.176) 0:00:22.803 ********** 2026-04-05 01:02:52.239698 | orchestrator | changed: [testbed-node-0] 
2026-04-05 01:02:52.239707 | orchestrator | 2026-04-05 01:02:52.239717 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:02:52.239726 | orchestrator | 2026-04-05 01:02:52.239739 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:02:52.239750 | orchestrator | Sunday 05 April 2026 01:02:35 +0000 (0:00:12.085) 0:00:34.889 ********** 2026-04-05 01:02:52.239763 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:02:52.239774 | orchestrator | 2026-04-05 01:02:52.239786 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-05 01:02:52.239797 | orchestrator | 2026-04-05 01:02:52.239808 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-05 01:02:52.239819 | orchestrator | Sunday 05 April 2026 01:02:46 +0000 (0:00:11.531) 0:00:46.421 ********** 2026-04-05 01:02:52.239830 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:02:52.239842 | orchestrator | 2026-04-05 01:02:52.239882 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:02:52.239895 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 01:02:52.239909 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.239921 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.239932 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.239944 | orchestrator | 2026-04-05 01:02:52.239955 | orchestrator | 2026-04-05 01:02:52.239966 | orchestrator | 2026-04-05 01:02:52.239978 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 01:02:52.239989 | orchestrator | Sunday 05 April 2026 01:02:48 +0000 (0:00:01.544) 0:00:47.966 ********** 2026-04-05 01:02:52.240001 | orchestrator | =============================================================================== 2026-04-05 01:02:52.240013 | orchestrator | Restart ceph manager service ------------------------------------------- 25.16s 2026-04-05 01:02:52.240036 | orchestrator | Create admin user ------------------------------------------------------ 10.88s 2026-04-05 01:02:52.240048 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.24s 2026-04-05 01:02:52.240059 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.72s 2026-04-05 01:02:52.240079 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.57s 2026-04-05 01:02:52.240091 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.31s 2026-04-05 01:02:52.240103 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.27s 2026-04-05 01:02:52.240114 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.23s 2026-04-05 01:02:52.240124 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.08s 2026-04-05 01:02:52.240133 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.05s 2026-04-05 01:02:52.240142 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2026-04-05 01:02:52.240152 | orchestrator | 2026-04-05 01:02:52.240161 | orchestrator | 2026-04-05 01:02:52.240171 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:02:52.240180 | orchestrator | 2026-04-05 01:02:52.240190 | orchestrator | TASK [Group hosts based 
on Kolla action] *************************************** 2026-04-05 01:02:52.240199 | orchestrator | Sunday 05 April 2026 01:02:07 +0000 (0:00:00.410) 0:00:00.410 ********** 2026-04-05 01:02:52.240209 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:02:52.240218 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:02:52.240228 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:02:52.240237 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:02:52.240246 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:02:52.240255 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:02:52.240265 | orchestrator | ok: [testbed-manager] 2026-04-05 01:02:52.240274 | orchestrator | 2026-04-05 01:02:52.240284 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:02:52.240294 | orchestrator | Sunday 05 April 2026 01:02:08 +0000 (0:00:00.837) 0:00:01.248 ********** 2026-04-05 01:02:52.240303 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-05 01:02:52.240314 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-05 01:02:52.240323 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-05 01:02:52.240332 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-05 01:02:52.240342 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-05 01:02:52.240351 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-05 01:02:52.240360 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-05 01:02:52.240370 | orchestrator | 2026-04-05 01:02:52.240379 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-05 01:02:52.240389 | orchestrator | 2026-04-05 01:02:52.240399 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-05 01:02:52.240408 | orchestrator | Sunday 05 April 2026 
01:02:09 +0000 (0:00:01.136) 0:00:02.384 ********** 2026-04-05 01:02:52.240418 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-05 01:02:52.240428 | orchestrator | 2026-04-05 01:02:52.240438 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-05 01:02:52.240453 | orchestrator | Sunday 05 April 2026 01:02:11 +0000 (0:00:02.192) 0:00:04.577 ********** 2026-04-05 01:02:52.240463 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-05 01:02:52.240472 | orchestrator | 2026-04-05 01:02:52.240487 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-05 01:02:52.240502 | orchestrator | Sunday 05 April 2026 01:02:22 +0000 (0:00:10.452) 0:00:15.029 ********** 2026-04-05 01:02:52.240520 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-05 01:02:52.240547 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-05 01:02:52.240566 | orchestrator | 2026-04-05 01:02:52.240593 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-05 01:02:52.240609 | orchestrator | Sunday 05 April 2026 01:02:29 +0000 (0:00:07.633) 0:00:22.663 ********** 2026-04-05 01:02:52.240625 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-05 01:02:52.240639 | orchestrator | 2026-04-05 01:02:52.240653 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-05 01:02:52.240668 | orchestrator | Sunday 05 April 2026 01:02:33 +0000 (0:00:03.867) 0:00:26.530 ********** 2026-04-05 01:02:52.240683 | orchestrator | changed: [testbed-node-0] 
=> (item=ceph_rgw -> service) 2026-04-05 01:02:52.240965 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:02:52.240991 | orchestrator | 2026-04-05 01:02:52.241007 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-05 01:02:52.241023 | orchestrator | Sunday 05 April 2026 01:02:38 +0000 (0:00:04.770) 0:00:31.301 ********** 2026-04-05 01:02:52.241038 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:02:52.241056 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-05 01:02:52.241070 | orchestrator | 2026-04-05 01:02:52.241080 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-05 01:02:52.241089 | orchestrator | Sunday 05 April 2026 01:02:45 +0000 (0:00:07.037) 0:00:38.338 ********** 2026-04-05 01:02:52.241097 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-05 01:02:52.241105 | orchestrator | 2026-04-05 01:02:52.241113 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:02:52.241130 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.241139 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.241148 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.241156 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.241164 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.241172 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.241180 | orchestrator | testbed-node-5 : 
ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:02:52.241187 | orchestrator | 2026-04-05 01:02:52.241195 | orchestrator | 2026-04-05 01:02:52.241203 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:02:52.241211 | orchestrator | Sunday 05 April 2026 01:02:51 +0000 (0:00:05.658) 0:00:43.996 ********** 2026-04-05 01:02:52.241219 | orchestrator | =============================================================================== 2026-04-05 01:02:52.241227 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 10.45s 2026-04-05 01:02:52.241235 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.63s 2026-04-05 01:02:52.241242 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.04s 2026-04-05 01:02:52.241250 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.66s 2026-04-05 01:02:52.241258 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.77s 2026-04-05 01:02:52.241265 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.87s 2026-04-05 01:02:52.241273 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.19s 2026-04-05 01:02:52.241281 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s 2026-04-05 01:02:52.241299 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2026-04-05 01:02:52.241307 | orchestrator | 2026-04-05 01:02:52 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:02:52.241315 | orchestrator | 2026-04-05 01:02:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:02:55.278325 | orchestrator | 2026-04-05 01:02:55 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:02:55.278428 | orchestrator | 2026-04-05 01:02:55 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:02:55.279467 | orchestrator | 2026-04-05 01:02:55 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:02:55.283213 | orchestrator | 2026-04-05 01:02:55 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:02:55.283240 | orchestrator | 2026-04-05 01:02:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:17.415346 | orchestrator | 2026-04-05 01:04:17 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:17.415713 | orchestrator | 2026-04-05 01:04:17 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:17.416341 | orchestrator | 2026-04-05 01:04:17 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:17.418189 | orchestrator | 2026-04-05 01:04:17 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:17.418618 | orchestrator | 2026-04-05 01:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:20.476411 | orchestrator | 2026-04-05 01:04:20 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:20.478303 | orchestrator | 2026-04-05 01:04:20 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:20.482558 | orchestrator | 2026-04-05 01:04:20 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:20.484967 | orchestrator | 2026-04-05 01:04:20 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:20.485017 | orchestrator | 2026-04-05 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:23.531893 | orchestrator | 2026-04-05 01:04:23 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:23.534470 | orchestrator | 2026-04-05 01:04:23 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:23.536636 | orchestrator | 2026-04-05 01:04:23 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:23.538562 | orchestrator | 2026-04-05 01:04:23 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:23.538618 | orchestrator | 2026-04-05 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:26.596087 | orchestrator | 2026-04-05 01:04:26 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:26.598249 | orchestrator | 2026-04-05 01:04:26 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:26.599899 | orchestrator | 2026-04-05 01:04:26 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:26.602280 | orchestrator | 2026-04-05 01:04:26 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:26.602373 | orchestrator | 2026-04-05 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:29.687874 | orchestrator | 2026-04-05 01:04:29 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:29.690085 | orchestrator | 2026-04-05 01:04:29 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:29.692737 | orchestrator | 2026-04-05 01:04:29 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:29.695292 | orchestrator | 2026-04-05 01:04:29 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:29.695496 | orchestrator | 2026-04-05 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:32.742332 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:32.742986 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:32.744700 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:32.745046 | orchestrator | 2026-04-05 01:04:32 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:32.745067 | orchestrator | 2026-04-05 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:35.774526 | orchestrator | 2026-04-05 01:04:35 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:35.774807 | orchestrator | 2026-04-05 01:04:35 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:35.775520 | orchestrator | 2026-04-05 01:04:35 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:35.776203 | orchestrator | 2026-04-05 01:04:35 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:35.776445 | orchestrator | 2026-04-05 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:38.830098 | orchestrator | 2026-04-05 01:04:38 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:38.833216 | orchestrator | 2026-04-05 01:04:38 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:38.835617 | orchestrator | 2026-04-05 01:04:38 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:38.835691 | orchestrator | 2026-04-05 01:04:38 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:38.835715 | orchestrator | 2026-04-05 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:41.898429 | orchestrator | 2026-04-05 01:04:41 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:41.898696 | orchestrator | 2026-04-05 01:04:41 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:41.901178 | orchestrator | 2026-04-05 01:04:41 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:41.905885 | orchestrator | 2026-04-05 01:04:41 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:41.905959 | orchestrator | 2026-04-05 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:44.957936 | orchestrator | 2026-04-05 01:04:44 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:44.960418 | orchestrator | 2026-04-05 01:04:44 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:44.962553 | orchestrator | 2026-04-05 01:04:44 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:44.964392 | orchestrator | 2026-04-05 01:04:44 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:44.964728 | orchestrator | 2026-04-05 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:48.007161 | orchestrator | 2026-04-05 01:04:48 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:48.007643 | orchestrator | 2026-04-05 01:04:48 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:48.007906 | orchestrator | 2026-04-05 01:04:48 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:48.008797 | orchestrator | 2026-04-05 01:04:48 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:48.008831 | orchestrator | 2026-04-05 01:04:48 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:51.052611 | orchestrator | 2026-04-05 01:04:51 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:51.053848 | orchestrator | 2026-04-05 01:04:51 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:51.056183 | orchestrator | 2026-04-05 01:04:51 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:51.058859 | orchestrator | 2026-04-05 01:04:51 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:51.058905 | orchestrator | 2026-04-05 01:04:51 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:54.101598 | orchestrator | 2026-04-05 01:04:54 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:54.102884 | orchestrator | 2026-04-05 01:04:54 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:54.103729 | orchestrator | 2026-04-05 01:04:54 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:54.106775 | orchestrator | 2026-04-05 01:04:54 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:54.106799 | orchestrator | 2026-04-05 01:04:54 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:04:57.150235 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:04:57.152696 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:04:57.157938 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:04:57.160654 | orchestrator | 2026-04-05 01:04:57 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:04:57.160715 | orchestrator | 2026-04-05 01:04:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:00.200844 | orchestrator | 2026-04-05 01:05:00 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:05:00.203387 | orchestrator | 2026-04-05 01:05:00 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:00.205952 | orchestrator | 2026-04-05 01:05:00 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:00.207299 | orchestrator | 2026-04-05 01:05:00 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:00.208020 | orchestrator | 2026-04-05 01:05:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:03.239787 | orchestrator | 2026-04-05 01:05:03 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:05:03.240149 | orchestrator | 2026-04-05 01:05:03 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:03.241937 | orchestrator | 2026-04-05 01:05:03 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:03.242700 | orchestrator | 2026-04-05 01:05:03 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:03.242889 | orchestrator | 2026-04-05 01:05:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:06.328761 | orchestrator | 2026-04-05 01:05:06 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:05:06.328885 | orchestrator | 2026-04-05 01:05:06 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:06.328903 | orchestrator | 2026-04-05 01:05:06 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:06.328915 | orchestrator | 2026-04-05 01:05:06 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:06.328927 | orchestrator | 2026-04-05 01:05:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:09.368330 | orchestrator | 2026-04-05 01:05:09 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:05:09.368707 | orchestrator | 2026-04-05 01:05:09 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:09.369569 | orchestrator | 2026-04-05 01:05:09 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:09.370516 | orchestrator | 2026-04-05 01:05:09 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:09.370545 | orchestrator | 2026-04-05 01:05:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:12.395445 | orchestrator | 2026-04-05 01:05:12 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:05:12.396760 | orchestrator | 2026-04-05 01:05:12 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:12.397409 | orchestrator | 2026-04-05 01:05:12 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:12.399674 | orchestrator | 2026-04-05 01:05:12 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:12.399751 | orchestrator | 2026-04-05 01:05:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:15.437230 | orchestrator | 2026-04-05 01:05:15 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:05:15.437688 | orchestrator | 2026-04-05 01:05:15 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:15.439420 | orchestrator | 2026-04-05 01:05:15 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:15.440628 | orchestrator | 2026-04-05 01:05:15 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:15.441681 | orchestrator | 2026-04-05 01:05:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:18.466614 | orchestrator | 2026-04-05 01:05:18 | INFO  | Task ed55e999-0d10-4489-891f-422b9f86068d is in state STARTED 2026-04-05 01:05:18.467084 | orchestrator | 2026-04-05 01:05:18 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:18.468341 | orchestrator | 2026-04-05 01:05:18 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:18.478243 | orchestrator | 2026-04-05 01:05:18 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:18.478296 | orchestrator | 2026-04-05 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:21.515856 | orchestrator | 2026-04-05 01:05:21 | INFO  | Task 
ed55e999-0d10-4489-891f-422b9f86068d is in state SUCCESS
2026-04-05 01:05:21.517248 | orchestrator |
2026-04-05 01:05:21.517340 | orchestrator |
2026-04-05 01:05:21.517368 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:05:21.517390 | orchestrator |
2026-04-05 01:05:21.517410 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:05:21.517630 | orchestrator | Sunday 05 April 2026 01:01:59 +0000 (0:00:00.343) 0:00:00.343 **********
2026-04-05 01:05:21.517652 | orchestrator | ok: [testbed-manager]
2026-04-05 01:05:21.517675 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:05:21.517772 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:05:21.517793 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:05:21.517811 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:05:21.517824 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:05:21.517836 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:05:21.517849 | orchestrator |
2026-04-05 01:05:21.517862 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:05:21.517876 | orchestrator | Sunday 05 April 2026 01:02:00 +0000 (0:00:00.806) 0:00:01.150 **********
2026-04-05 01:05:21.517893 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-05 01:05:21.517912 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-05 01:05:21.517932 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-05 01:05:21.517951 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-05 01:05:21.517969 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-05 01:05:21.518437 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-05 01:05:21.518478 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-05
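The long run of status checks recorded above is a plain poll-and-wait loop: query each task's state, log it, sleep, and repeat until no task is still STARTED. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable and `wait_for_tasks` helper (not the actual OSISM client code):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task's state until none is STARTED; return the final states.

    task_ids:  iterable of task identifiers (e.g. UUID strings)
    get_state: callable mapping a task id to a state string
    interval:  seconds to sleep between polling rounds
    """
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            log(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        log(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

In the log above the same check runs for roughly two minutes before the first task reaches SUCCESS, so a production implementation would likely also want a timeout and error handling for tasks that end in a failure state.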
01:05:21.518528 | orchestrator |
2026-04-05 01:05:21.518550 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-05 01:05:21.518568 | orchestrator |
2026-04-05 01:05:21.518603 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-05 01:05:21.518622 | orchestrator | Sunday 05 April 2026 01:02:01 +0000 (0:00:00.999) 0:00:02.149 **********
2026-04-05 01:05:21.518641 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:05:21.518663 | orchestrator |
2026-04-05 01:05:21.518683 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-05 01:05:21.518701 | orchestrator | Sunday 05 April 2026 01:02:02 +0000 (0:00:01.444) 0:00:03.594 **********
2026-04-05 01:05:21.518725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.518749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.518941 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 01:05:21.518960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.519078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.519105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.519263 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.519289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.519303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.519317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.519330 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.519343 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.519392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.519471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.519515 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 01:05:21.519605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.519691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.519812 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05
01:05:21.519827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.519894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.519916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.519928 | orchestrator | 2026-04-05 01:05:21.519939 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-05 01:05:21.519988 | orchestrator | Sunday 05 April 2026 01:02:07 +0000 (0:00:04.702) 0:00:08.296 ********** 2026-04-05 01:05:21.520001 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:05:21.520069 | orchestrator | 2026-04-05 01:05:21.520092 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-05 01:05:21.520111 | orchestrator | Sunday 05 April 2026 01:02:09 +0000 (0:00:01.614) 0:00:09.911 ********** 2026-04-05 01:05:21.520131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.520151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.520171 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-05 01:05:21.520190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.520246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.520270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.520390 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.520424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.520436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520534 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520614 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-05 01:05:21.520634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.520715 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.520782 | orchestrator | 2026-04-05 01:05:21.520792 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-05 01:05:21.520803 | orchestrator | Sunday 05 April 2026 01:02:15 +0000 (0:00:06.221) 0:00:16.133 ********** 2026-04-05 01:05:21.520866 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-05 01:05:21.520887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:21.520904 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:21.520922 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-05 01:05:21.520973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:21.520995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:21.521035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-05 01:05:21.521053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:21.521063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:21.521074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:21.521091 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:21.521102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 01:05:21.521112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:21.521140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 01:05:21.521160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 01:05:21.521179 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521203 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:05:21.521225 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:05:21.521243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521343 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:05:21.521354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521406 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:05:21.521427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521496 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:05:21.521516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521593 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:05:21.521611 | orchestrator |
2026-04-05 01:05:21.521628 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-05 01:05:21.521639 | orchestrator | Sunday 05 April 2026 01:02:16 +0000 (0:00:01.558) 0:00:17.691 **********
2026-04-05 01:05:21.521656 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 01:05:21.521666 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521684 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521694 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 01:05:21.521705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521731 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521793 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:05:21.521803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.521928 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:05:21.521938 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:05:21.521949 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:05:21.521973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.521984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.521995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522176 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:05:21.522231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522263 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:05:21.522271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522313 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:05:21.522321 | orchestrator |
2026-04-05 01:05:21.522329 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-05 01:05:21.522338 | orchestrator | Sunday 05 April 2026 01:02:18 +0000 (0:00:02.018) 0:00:19.710 **********
2026-04-05 01:05:21.522346 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 01:05:21.522366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522419 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 01:05:21.522442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.522454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.522463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.522471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522508 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.522593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.522606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 01:05:21.522619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522649 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 01:05:21.522681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 01:05:21.522711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions':
{}}}) 2026-04-05 01:05:21.522726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.522747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.522764 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.522779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.522794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.522809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.522823 | orchestrator | 2026-04-05 01:05:21.522839 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-05 01:05:21.522865 | orchestrator | Sunday 05 April 2026 01:02:25 +0000 (0:00:06.204) 0:00:25.915 ********** 2026-04-05 01:05:21.522879 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:21.522888 | orchestrator | 2026-04-05 01:05:21.522896 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-05 01:05:21.522921 | orchestrator | Sunday 05 April 2026 01:02:26 +0000 
(0:00:00.992) 0:00:26.907 ********** 2026-04-05 01:05:21.522930 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318108, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8147502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.522945 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318108, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8147502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.522953 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318108, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8147502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.522962 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1318130, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.522970 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318108, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8147502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.522979 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318108, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8147502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.522997 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1318130, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523006 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318108, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8147502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523040 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1318104, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8134825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523049 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1318108, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 
1775347349.0, 'ctime': 1775348303.8147502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523058 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1318104, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8134825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523066 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1318130, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523075 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1318120, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8175962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523101 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1318130, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523110 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1318130, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523118 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1318130, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523130 | orchestrator | skipping: [testbed-node-5] 
=> (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1318104, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8134825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523138 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1318104, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8134825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523146 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1318104, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8134825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523155 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1318120, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8175962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523178 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1318120, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8175962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523187 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1318100, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.812689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523196 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1318104, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 
1775348303.8134825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523207 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1318120, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8175962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523216 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1318100, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.812689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523224 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1318130, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.523233 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1318100, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.812689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523247 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1318110, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8155217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523272 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1318120, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8175962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523282 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1318110, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8155217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523293 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1318120, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8175962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523302 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1318100, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.812689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523310 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1318100, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.812689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523323 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1318110, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8155217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523331 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1318110, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8155217, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523347 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1318119, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 
1775348303.8169186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523356 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1318119, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8169186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523368 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1318100, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.812689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.523376 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1318119, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8169186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 01:05:21.523384 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-04-05 01:05:21.523397 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-04-05 01:05:21.523405 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-04-05 01:05:21.523426 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-04-05 01:05:21.523435 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-04-05 01:05:21.523449 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-04-05 01:05:21.523457 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-04-05 01:05:21.523466 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-04-05 01:05:21.523478 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-05 01:05:21.523487 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-05 01:05:21.523507 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-04-05 01:05:21.523516 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-05 01:05:21.523527 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-04-05 01:05:21.523536 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-05 01:05:21.523549 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-05 01:05:21.523557 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-05 01:05:21.523565 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-05 01:05:21.523584 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-05 01:05:21.523593 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-04-05 01:05:21.523605 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-04-05 01:05:21.523613 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-05 01:05:21.523626 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-05 01:05:21.523635 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-04-05 01:05:21.523643 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-05 01:05:21.523663 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-05 01:05:21.523672 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-05 01:05:21.523683 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-04-05 01:05:21.523692 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-05 01:05:21.523705 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-05 01:05:21.523713 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-04-05 01:05:21.523722 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-04-05 01:05:21.523742 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-05 01:05:21.523758 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-05 01:05:21.523778 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-05 01:05:21.523793 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-04-05 01:05:21.523815 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-05 01:05:21.523829 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-05 01:05:21.523844 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-05 01:05:21.523879 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-05 01:05:21.523896 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2026-04-05 01:05:21.523917 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-05 01:05:21.523941 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-04-05 01:05:21.523956 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-05 01:05:21.523971 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-05 01:05:21.523986 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-04-05 01:05:21.524001 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-04-05 01:05:21.524026 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-05 01:05:21.524040 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-04-05 01:05:21.524054 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-04-05 01:05:21.524062 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-04-05 01:05:21.524070 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-04-05 01:05:21.524078 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2026-04-05 01:05:21.524093 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-04-05 01:05:21.524101 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-04-05 01:05:21.524113 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-05 01:05:21.524129 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-04-05 01:05:21.524137 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2026-04-05 01:05:21.524145 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-05 01:05:21.524153 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-04-05 01:05:21.524166 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2026-04-05 01:05:21.524175 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-04-05 01:05:21.524191 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-05 01:05:21.524199 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-05 01:05:21.524208 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-05 01:05:21.524216 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:05:21.524224 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-05 01:05:21.524232 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:05:21.524240 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-05 01:05:21.524248 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:05:21.524261 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-05 01:05:21.524269 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:05:21.524277 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1318098, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 
1775348303.8122168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524295 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1318098, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8122168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524304 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318116, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8165126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524312 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318116, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8165126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524320 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1318119, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8169186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524328 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318113, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8162818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524340 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318113, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8162818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524354 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318140, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8215337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524363 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.524374 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318140, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8215337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 01:05:21.524383 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.524391 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1318112, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.815807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524399 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1318106, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.81375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524408 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1318129, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8191857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524416 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1318097, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8119228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524428 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1318141, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8219318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1318127, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8187916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524453 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1318101, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8131065, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524461 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1318098, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 
1775348303.8122168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524469 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1318116, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8165126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524477 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1318113, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8162818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524486 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1318140, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8215337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 01:05:21.524494 | orchestrator | 2026-04-05 01:05:21.524502 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-05 01:05:21.524517 | orchestrator | Sunday 05 April 2026 01:02:54 +0000 (0:00:28.034) 0:00:54.941 ********** 2026-04-05 01:05:21.524525 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:21.524533 | orchestrator | 2026-04-05 01:05:21.524545 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-05 01:05:21.524553 | orchestrator | Sunday 05 April 2026 01:02:55 +0000 (0:00:01.041) 0:00:55.983 ********** 2026-04-05 01:05:21.524561 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.524570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524578 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:21.524586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524594 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-05 01:05:21.524602 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:21.524609 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.524617 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524625 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:21.524633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524640 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-05 01:05:21.524648 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:05:21.524657 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.524665 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524673 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:21.524680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524688 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-05 01:05:21.524700 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 01:05:21.524708 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.524716 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524723 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:21.524731 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524739 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-05 01:05:21.524747 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 01:05:21.524754 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.524762 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524770 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:21.524778 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524786 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-05 01:05:21.524794 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 01:05:21.524802 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.524810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524817 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:21.524825 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 
01:05:21.524833 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-05 01:05:21.524841 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 01:05:21.524848 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.524856 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524864 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-05 01:05:21.524872 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-05 01:05:21.524885 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-05 01:05:21.524893 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 01:05:21.524901 | orchestrator | 2026-04-05 01:05:21.524915 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-05 01:05:21.524929 | orchestrator | Sunday 05 April 2026 01:02:58 +0000 (0:00:02.944) 0:00:58.928 ********** 2026-04-05 01:05:21.524942 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:21.524956 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.524969 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:21.524982 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.524996 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:21.525036 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.525051 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:21.525066 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.525080 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:21.525095 | 
orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.525108 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-05 01:05:21.525122 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.525136 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-05 01:05:21.525150 | orchestrator | 2026-04-05 01:05:21.525162 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-05 01:05:21.525170 | orchestrator | Sunday 05 April 2026 01:03:16 +0000 (0:00:17.803) 0:01:16.731 ********** 2026-04-05 01:05:21.525178 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:21.525194 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.525202 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:21.525210 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.525217 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:21.525225 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.525233 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:21.525241 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.525249 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:21.525257 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.525264 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-05 01:05:21.525272 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.525280 | orchestrator | changed: 
[testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-05 01:05:21.525288 | orchestrator | 2026-04-05 01:05:21.525296 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-05 01:05:21.525303 | orchestrator | Sunday 05 April 2026 01:03:20 +0000 (0:00:04.142) 0:01:20.874 ********** 2026-04-05 01:05:21.525311 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:21.525320 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.525334 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:21.525342 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.525350 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:21.525364 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.525372 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:21.525380 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.525388 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:21.525396 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.525404 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-05 01:05:21.525412 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.525420 | orchestrator | changed: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-05 01:05:21.525427 | orchestrator | 2026-04-05 01:05:21.525435 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-05 01:05:21.525443 | orchestrator | Sunday 05 April 2026 01:03:22 +0000 (0:00:02.510) 0:01:23.384 ********** 2026-04-05 01:05:21.525451 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:21.525458 | orchestrator | 2026-04-05 01:05:21.525467 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-05 01:05:21.525475 | orchestrator | Sunday 05 April 2026 01:03:23 +0000 (0:00:00.925) 0:01:24.310 ********** 2026-04-05 01:05:21.525482 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:21.525490 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.525498 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.525506 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.525513 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.525521 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.525529 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.525536 | orchestrator | 2026-04-05 01:05:21.525544 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-05 01:05:21.525552 | orchestrator | Sunday 05 April 2026 01:03:24 +0000 (0:00:00.809) 0:01:25.120 ********** 2026-04-05 01:05:21.525560 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:21.525567 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.525575 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.525583 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.525590 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:21.525598 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:21.525606 | 
orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:21.525614 | orchestrator | 2026-04-05 01:05:21.525622 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-05 01:05:21.525629 | orchestrator | Sunday 05 April 2026 01:03:27 +0000 (0:00:02.644) 0:01:27.764 ********** 2026-04-05 01:05:21.525637 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:21.525645 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.525653 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:21.525661 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.525668 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:21.525676 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:21.525684 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:21.525692 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.525704 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:21.525712 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.525720 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:21.525733 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.525741 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-05 01:05:21.525749 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.525756 | orchestrator | 2026-04-05 01:05:21.525764 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-05 01:05:21.525772 | orchestrator | Sunday 05 April 2026 01:03:29 +0000 
(0:00:02.392) 0:01:30.157 ********** 2026-04-05 01:05:21.525780 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:21.525788 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:21.525795 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:21.525803 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:21.525811 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-05 01:05:21.525819 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.525827 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.525835 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.525849 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.525857 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:21.525865 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.525873 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-05 01:05:21.525881 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.525889 | orchestrator | 2026-04-05 01:05:21.525897 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-05 01:05:21.525904 | orchestrator | Sunday 05 April 2026 01:03:32 +0000 (0:00:02.691) 0:01:32.849 ********** 2026-04-05 01:05:21.525912 | orchestrator | [WARNING]: Skipped 2026-04-05 01:05:21.525920 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-05 01:05:21.525928 | orchestrator | due to this access issue: 2026-04-05 01:05:21.525936 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-05 01:05:21.525943 | orchestrator | not a directory 2026-04-05 01:05:21.525951 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:05:21.525959 | orchestrator | 2026-04-05 01:05:21.525967 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-05 01:05:21.525975 | orchestrator | Sunday 05 April 2026 01:03:33 +0000 (0:00:01.215) 0:01:34.064 ********** 2026-04-05 01:05:21.525983 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:21.525991 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.525998 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.526006 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.526154 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.526176 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.526184 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:05:21.526192 | orchestrator | 2026-04-05 01:05:21.526201 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-05 01:05:21.526209 | orchestrator | Sunday 05 April 2026 01:03:34 +0000 (0:00:00.657) 0:01:34.722 ********** 2026-04-05 01:05:21.526216 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:21.526224 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:21.526232 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:21.526239 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:21.526247 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:05:21.526263 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:05:21.526271 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
01:05:21.526279 | orchestrator | 2026-04-05 01:05:21.526287 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-05 01:05:21.526294 | orchestrator | Sunday 05 April 2026 01:03:34 +0000 (0:00:00.934) 0:01:35.656 ********** 2026-04-05 01:05:21.526304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.526314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.526333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.526341 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.526355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.526364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-05 01:05:21.526372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526417 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526429 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.526438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 01:05:21.526446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526459 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526518 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-05 01:05:21.526533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 01:05:21.526571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 
01:05:21.526592 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 01:05:21.526613 | orchestrator | 2026-04-05 01:05:21.526622 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-05 01:05:21.526630 | orchestrator | Sunday 05 April 2026 01:03:40 +0000 (0:00:05.599) 0:01:41.256 ********** 2026-04-05 01:05:21.526638 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-05 01:05:21.526646 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:05:21.526653 | orchestrator | 2026-04-05 01:05:21.526661 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 01:05:21.526669 | orchestrator | Sunday 05 April 2026 01:03:41 +0000 (0:00:01.361) 0:01:42.618 ********** 2026-04-05 01:05:21.526677 | orchestrator | 2026-04-05 01:05:21.526685 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 01:05:21.526693 | 
orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.095) 0:01:42.714 ********** 2026-04-05 01:05:21.526701 | orchestrator | 2026-04-05 01:05:21.526708 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 01:05:21.526717 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.071) 0:01:42.785 ********** 2026-04-05 01:05:21.526725 | orchestrator | 2026-04-05 01:05:21.526732 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 01:05:21.526740 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.082) 0:01:42.868 ********** 2026-04-05 01:05:21.526748 | orchestrator | 2026-04-05 01:05:21.526755 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 01:05:21.526763 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.077) 0:01:42.945 ********** 2026-04-05 01:05:21.526771 | orchestrator | 2026-04-05 01:05:21.526779 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 01:05:21.526786 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.073) 0:01:43.019 ********** 2026-04-05 01:05:21.526794 | orchestrator | 2026-04-05 01:05:21.526803 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 01:05:21.526810 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.073) 0:01:43.092 ********** 2026-04-05 01:05:21.526818 | orchestrator | 2026-04-05 01:05:21.526826 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-05 01:05:21.526834 | orchestrator | Sunday 05 April 2026 01:03:42 +0000 (0:00:00.113) 0:01:43.206 ********** 2026-04-05 01:05:21.526842 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:21.526849 | orchestrator | 2026-04-05 01:05:21.526857 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-node-exporter container] ****** 2026-04-05 01:05:21.526870 | orchestrator | Sunday 05 April 2026 01:03:59 +0000 (0:00:17.393) 0:02:00.600 ********** 2026-04-05 01:05:21.526878 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:05:21.526885 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:05:21.526893 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:21.526901 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:05:21.526908 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:21.526916 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:21.526924 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:21.526932 | orchestrator | 2026-04-05 01:05:21.526940 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-05 01:05:21.526947 | orchestrator | Sunday 05 April 2026 01:04:16 +0000 (0:00:16.584) 0:02:17.184 ********** 2026-04-05 01:05:21.526955 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:21.526963 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:21.526975 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:21.526983 | orchestrator | 2026-04-05 01:05:21.526991 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-05 01:05:21.526999 | orchestrator | Sunday 05 April 2026 01:04:27 +0000 (0:00:10.914) 0:02:28.099 ********** 2026-04-05 01:05:21.527007 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:21.527044 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:21.527059 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:21.527073 | orchestrator | 2026-04-05 01:05:21.527087 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-05 01:05:21.527096 | orchestrator | Sunday 05 April 2026 01:04:32 +0000 (0:00:05.237) 0:02:33.336 ********** 2026-04-05 01:05:21.527104 | orchestrator | changed: 
[testbed-node-1] 2026-04-05 01:05:21.527112 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:05:21.527120 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:21.527128 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:21.527136 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:05:21.527143 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:05:21.527155 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:21.527163 | orchestrator | 2026-04-05 01:05:21.527171 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-05 01:05:21.527179 | orchestrator | Sunday 05 April 2026 01:04:46 +0000 (0:00:13.593) 0:02:46.930 ********** 2026-04-05 01:05:21.527187 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:21.527195 | orchestrator | 2026-04-05 01:05:21.527202 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-05 01:05:21.527210 | orchestrator | Sunday 05 April 2026 01:04:54 +0000 (0:00:08.412) 0:02:55.343 ********** 2026-04-05 01:05:21.527218 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:21.527226 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:21.527234 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:21.527242 | orchestrator | 2026-04-05 01:05:21.527250 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-05 01:05:21.527257 | orchestrator | Sunday 05 April 2026 01:04:59 +0000 (0:00:05.285) 0:03:00.629 ********** 2026-04-05 01:05:21.527265 | orchestrator | changed: [testbed-manager] 2026-04-05 01:05:21.527273 | orchestrator | 2026-04-05 01:05:21.527281 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-05 01:05:21.527289 | orchestrator | Sunday 05 April 2026 01:05:09 +0000 (0:00:09.880) 0:03:10.509 ********** 2026-04-05 01:05:21.527297 | orchestrator | changed: 
[testbed-node-4] 2026-04-05 01:05:21.527305 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:05:21.527312 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:05:21.527320 | orchestrator | 2026-04-05 01:05:21.527328 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:05:21.527336 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-05 01:05:21.527344 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 01:05:21.527353 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 01:05:21.527361 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 01:05:21.527368 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 01:05:21.527376 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 01:05:21.527384 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 01:05:21.527398 | orchestrator | 2026-04-05 01:05:21.527406 | orchestrator | 2026-04-05 01:05:21.527414 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:05:21.527422 | orchestrator | Sunday 05 April 2026 01:05:20 +0000 (0:00:10.481) 0:03:20.991 ********** 2026-04-05 01:05:21.527430 | orchestrator | =============================================================================== 2026-04-05 01:05:21.527438 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.03s 2026-04-05 01:05:21.527445 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.80s 2026-04-05 01:05:21.527453 | 
orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.39s 2026-04-05 01:05:21.527461 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.58s 2026-04-05 01:05:21.527469 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.59s 2026-04-05 01:05:21.527483 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.91s 2026-04-05 01:05:21.527491 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.48s 2026-04-05 01:05:21.527499 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.88s 2026-04-05 01:05:21.527507 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.41s 2026-04-05 01:05:21.527514 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.22s 2026-04-05 01:05:21.527522 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.20s 2026-04-05 01:05:21.527530 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.60s 2026-04-05 01:05:21.527538 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.29s 2026-04-05 01:05:21.527546 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.24s 2026-04-05 01:05:21.527554 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.70s 2026-04-05 01:05:21.527562 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.14s 2026-04-05 01:05:21.527570 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.94s 2026-04-05 01:05:21.527578 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.69s 2026-04-05 01:05:21.527585 | orchestrator | 
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.64s 2026-04-05 01:05:21.527593 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.51s 2026-04-05 01:05:21.527605 | orchestrator | 2026-04-05 01:05:21 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:21.527613 | orchestrator | 2026-04-05 01:05:21 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:21.527621 | orchestrator | 2026-04-05 01:05:21 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:21.527629 | orchestrator | 2026-04-05 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:24.577184 | orchestrator | 2026-04-05 01:05:24 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:24.578576 | orchestrator | 2026-04-05 01:05:24 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:24.581824 | orchestrator | 2026-04-05 01:05:24 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:24.583052 | orchestrator | 2026-04-05 01:05:24 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:24.583083 | orchestrator | 2026-04-05 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:27.632847 | orchestrator | 2026-04-05 01:05:27 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:27.633981 | orchestrator | 2026-04-05 01:05:27 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:27.636264 | orchestrator | 2026-04-05 01:05:27 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:27.637327 | orchestrator | 2026-04-05 01:05:27 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:27.637355 | orchestrator | 2026-04-05 01:05:27 | INFO  | 
Wait 1 second(s) until the next check 2026-04-05 01:05:30.671514 | orchestrator | 2026-04-05 01:05:30 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:30.672183 | orchestrator | 2026-04-05 01:05:30 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:30.673087 | orchestrator | 2026-04-05 01:05:30 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:30.673911 | orchestrator | 2026-04-05 01:05:30 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:30.673951 | orchestrator | 2026-04-05 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:33.718974 | orchestrator | 2026-04-05 01:05:33 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:33.720631 | orchestrator | 2026-04-05 01:05:33 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:33.722661 | orchestrator | 2026-04-05 01:05:33 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:33.724358 | orchestrator | 2026-04-05 01:05:33 | INFO  | Task 4a2602ed-76cc-4a67-94e6-216d83a097ad is in state STARTED 2026-04-05 01:05:33.724664 | orchestrator | 2026-04-05 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:36.780709 | orchestrator | 2026-04-05 01:05:36 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:36.794434 | orchestrator | 2026-04-05 01:05:36 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:36.797135 | orchestrator | 2026-04-05 01:05:36 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:36.799154 | orchestrator | 2026-04-05 01:05:36 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:36.801740 | orchestrator | 2026-04-05 01:05:36 | INFO  | Task 
4a2602ed-76cc-4a67-94e6-216d83a097ad is in state SUCCESS 2026-04-05 01:05:36.803727 | orchestrator | 2026-04-05 01:05:36.803766 | orchestrator | 2026-04-05 01:05:36.803779 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:05:36.803792 | orchestrator | 2026-04-05 01:05:36.803803 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:05:36.803815 | orchestrator | Sunday 05 April 2026 01:02:07 +0000 (0:00:00.364) 0:00:00.364 ********** 2026-04-05 01:05:36.803827 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:05:36.803839 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:05:36.803850 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:05:36.803861 | orchestrator | 2026-04-05 01:05:36.803872 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:05:36.803883 | orchestrator | Sunday 05 April 2026 01:02:08 +0000 (0:00:00.395) 0:00:00.760 ********** 2026-04-05 01:05:36.803894 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-05 01:05:36.803906 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-05 01:05:36.803917 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-05 01:05:36.803928 | orchestrator | 2026-04-05 01:05:36.803939 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-05 01:05:36.803950 | orchestrator | 2026-04-05 01:05:36.804003 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 01:05:36.804071 | orchestrator | Sunday 05 April 2026 01:02:08 +0000 (0:00:00.372) 0:00:01.132 ********** 2026-04-05 01:05:36.804085 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:36.804098 | orchestrator | 2026-04-05 01:05:36.804109 | 
orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-05 01:05:36.804120 | orchestrator | Sunday 05 April 2026 01:02:09 +0000 (0:00:00.856) 0:00:01.989 ********** 2026-04-05 01:05:36.804131 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-04-05 01:05:36.804141 | orchestrator | 2026-04-05 01:05:36.804152 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-04-05 01:05:36.804163 | orchestrator | Sunday 05 April 2026 01:02:23 +0000 (0:00:14.032) 0:00:16.021 ********** 2026-04-05 01:05:36.804174 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-05 01:05:36.804186 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-05 01:05:36.804197 | orchestrator | 2026-04-05 01:05:36.804208 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-05 01:05:36.804219 | orchestrator | Sunday 05 April 2026 01:02:31 +0000 (0:00:07.778) 0:00:23.800 ********** 2026-04-05 01:05:36.804230 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:05:36.804241 | orchestrator | 2026-04-05 01:05:36.804252 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-05 01:05:36.804263 | orchestrator | Sunday 05 April 2026 01:02:34 +0000 (0:00:03.875) 0:00:27.675 ********** 2026-04-05 01:05:36.804274 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-05 01:05:36.804286 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:05:36.804297 | orchestrator | 2026-04-05 01:05:36.804308 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-04-05 01:05:36.804321 | orchestrator | Sunday 05 April 2026 01:02:39 +0000 (0:00:04.936) 0:00:32.612 
********** 2026-04-05 01:05:36.804336 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:05:36.804349 | orchestrator | 2026-04-05 01:05:36.804361 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-05 01:05:36.804374 | orchestrator | Sunday 05 April 2026 01:02:43 +0000 (0:00:03.618) 0:00:36.230 ********** 2026-04-05 01:05:36.804388 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-05 01:05:36.804401 | orchestrator | 2026-04-05 01:05:36.804414 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-05 01:05:36.804427 | orchestrator | Sunday 05 April 2026 01:02:47 +0000 (0:00:04.189) 0:00:40.420 ********** 2026-04-05 01:05:36.804463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.804498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.804515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.804531 | orchestrator | 2026-04-05 01:05:36.804544 | orchestrator | TASK 
[glance : include_tasks] **************************************************
2026-04-05 01:05:36.804564 | orchestrator | Sunday 05 April 2026 01:02:52 +0000 (0:00:04.446) 0:00:44.866 **********
2026-04-05 01:05:36.804577 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:05:36.804590 | orchestrator |
2026-04-05 01:05:36.804603 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-05 01:05:36.804622 | orchestrator | Sunday 05 April 2026 01:02:52 +0000 (0:00:00.666) 0:00:45.533 **********
2026-04-05 01:05:36.804636 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:05:36.804649 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:05:36.804665 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:05:36.804683 | orchestrator |
2026-04-05 01:05:36.804711 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-05 01:05:36.804729 | orchestrator | Sunday 05 April 2026 01:02:58 +0000 (0:00:05.822) 0:00:51.356 **********
2026-04-05 01:05:36.804746 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-05 01:05:36.804764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-05 01:05:36.804781 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-05 01:05:36.804798 | orchestrator |
2026-04-05 01:05:36.804817 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-05 01:05:36.804844 | orchestrator | Sunday 05 April 2026 01:03:01 +0000 (0:00:02.566) 0:00:53.923 **********
2026-04-05 01:05:36.804864 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-05 01:05:36.804882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-05 01:05:36.804901 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-05 01:05:36.804920 | orchestrator |
2026-04-05 01:05:36.804938 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-05 01:05:36.804956 | orchestrator | Sunday 05 April 2026 01:03:02 +0000 (0:00:01.445) 0:00:55.368 **********
2026-04-05 01:05:36.804975 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:05:36.804996 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:05:36.805015 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:05:36.805034 | orchestrator |
2026-04-05 01:05:36.805119 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-05 01:05:36.805139 | orchestrator | Sunday 05 April 2026 01:03:03 +0000 (0:00:00.771) 0:00:56.140 **********
2026-04-05 01:05:36.805159 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:05:36.805178 | orchestrator |
2026-04-05 01:05:36.805197 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-05 01:05:36.805216 | orchestrator | Sunday 05 April 2026 01:03:03 +0000 (0:00:00.147) 0:00:56.288 **********
2026-04-05 01:05:36.805235 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:05:36.805254 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:05:36.805273 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:05:36.805291 | orchestrator |
2026-04-05 01:05:36.805311 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 01:05:36.805330 | orchestrator | Sunday 05 April 2026 01:03:03 +0000 (0:00:00.279) 0:00:56.567 **********
2026-04-05 01:05:36.805348 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:05:36.805367 | orchestrator | 2026-04-05 01:05:36.805387 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-05 01:05:36.805405 | orchestrator | Sunday 05 April 2026 01:03:04 +0000 (0:00:00.798) 0:00:57.365 ********** 2026-04-05 01:05:36.805427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.805486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.805510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.805541 | orchestrator | 2026-04-05 01:05:36.805560 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-05 01:05:36.805580 | orchestrator | Sunday 05 April 2026 01:03:08 +0000 (0:00:03.880) 0:01:01.246 ********** 2026-04-05 01:05:36.805620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:05:36.805642 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.805662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:05:36.805692 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.805721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:05:36.805742 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.805760 | orchestrator | 2026-04-05 01:05:36.805779 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-05 01:05:36.805798 | orchestrator | Sunday 05 April 2026 01:03:11 +0000 (0:00:03.101) 0:01:04.347 ********** 2026-04-05 01:05:36.805826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:05:36.805858 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.805878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:05:36.805898 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.805946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 01:05:36.805970 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.805989 | orchestrator | 2026-04-05 01:05:36.806007 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-05 01:05:36.806121 | orchestrator | Sunday 05 April 2026 01:03:15 +0000 (0:00:03.700) 0:01:08.048 ********** 2026-04-05 01:05:36.806143 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806175 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806197 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806217 | orchestrator | 2026-04-05 01:05:36.806235 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-05 01:05:36.806247 | orchestrator | Sunday 05 April 2026 01:03:19 +0000 (0:00:04.535) 0:01:12.583 ********** 2026-04-05 01:05:36.806259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.806290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.806304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.806323 | orchestrator | 2026-04-05 01:05:36.806334 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-05 01:05:36.806345 | orchestrator | Sunday 05 April 2026 01:03:25 +0000 (0:00:05.439) 0:01:18.022 ********** 2026-04-05 01:05:36.806356 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:36.806367 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:36.806377 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:36.806388 | orchestrator | 2026-04-05 01:05:36.806399 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-05 01:05:36.806409 | orchestrator | Sunday 05 April 2026 01:03:34 +0000 (0:00:08.985) 0:01:27.007 ********** 2026-04-05 01:05:36.806420 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806431 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806441 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806452 | orchestrator | 2026-04-05 01:05:36.806463 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-05 01:05:36.806474 | orchestrator | Sunday 05 April 2026 01:03:40 
+0000 (0:00:05.992) 0:01:33.000 ********** 2026-04-05 01:05:36.806484 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806495 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806506 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806516 | orchestrator | 2026-04-05 01:05:36.806527 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-05 01:05:36.806538 | orchestrator | Sunday 05 April 2026 01:03:43 +0000 (0:00:03.586) 0:01:36.586 ********** 2026-04-05 01:05:36.806549 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806559 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806575 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806587 | orchestrator | 2026-04-05 01:05:36.806598 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-05 01:05:36.806609 | orchestrator | Sunday 05 April 2026 01:03:48 +0000 (0:00:05.080) 0:01:41.666 ********** 2026-04-05 01:05:36.806619 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806630 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806640 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806651 | orchestrator | 2026-04-05 01:05:36.806661 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-05 01:05:36.806672 | orchestrator | Sunday 05 April 2026 01:03:54 +0000 (0:00:05.307) 0:01:46.974 ********** 2026-04-05 01:05:36.806683 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806694 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806705 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806722 | orchestrator | 2026-04-05 01:05:36.806733 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-05 01:05:36.806744 | orchestrator | Sunday 05 April 2026 01:03:54 
+0000 (0:00:00.564) 0:01:47.539 ********** 2026-04-05 01:05:36.806760 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 01:05:36.806771 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806782 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 01:05:36.806793 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806804 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 01:05:36.806815 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806825 | orchestrator | 2026-04-05 01:05:36.806836 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-05 01:05:36.806847 | orchestrator | Sunday 05 April 2026 01:04:00 +0000 (0:00:05.379) 0:01:52.918 ********** 2026-04-05 01:05:36.806858 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806868 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806879 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806890 | orchestrator | 2026-04-05 01:05:36.806901 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-05 01:05:36.806911 | orchestrator | Sunday 05 April 2026 01:04:09 +0000 (0:00:09.024) 0:02:01.943 ********** 2026-04-05 01:05:36.806922 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.806933 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.806943 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.806954 | orchestrator | 2026-04-05 01:05:36.806965 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-05 01:05:36.806976 | orchestrator | Sunday 05 April 2026 01:04:13 +0000 (0:00:04.501) 0:02:06.445 ********** 2026-04-05 01:05:36.806987 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.807013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.807033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 01:05:36.807229 | orchestrator | 2026-04-05 01:05:36.807250 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 01:05:36.807262 | orchestrator | Sunday 05 April 2026 01:04:18 +0000 (0:00:04.459) 0:02:10.904 ********** 2026-04-05 01:05:36.807273 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:05:36.807284 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:05:36.807295 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:05:36.807305 | orchestrator | 2026-04-05 01:05:36.807316 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-05 01:05:36.807327 | orchestrator | Sunday 05 April 2026 01:04:19 +0000 (0:00:00.867) 0:02:11.772 ********** 2026-04-05 01:05:36.807337 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:36.807348 | orchestrator | 
2026-04-05 01:05:36.807358 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-05 01:05:36.807366 | orchestrator | Sunday 05 April 2026 01:04:21 +0000 (0:00:02.630) 0:02:14.402 ********** 2026-04-05 01:05:36.807374 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:36.807391 | orchestrator | 2026-04-05 01:05:36.807399 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-05 01:05:36.807407 | orchestrator | Sunday 05 April 2026 01:04:24 +0000 (0:00:02.914) 0:02:17.317 ********** 2026-04-05 01:05:36.807415 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:36.807423 | orchestrator | 2026-04-05 01:05:36.807430 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-05 01:05:36.807447 | orchestrator | Sunday 05 April 2026 01:04:27 +0000 (0:00:02.419) 0:02:19.737 ********** 2026-04-05 01:05:36.807455 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:36.807463 | orchestrator | 2026-04-05 01:05:36.807471 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-05 01:05:36.807479 | orchestrator | Sunday 05 April 2026 01:04:58 +0000 (0:00:31.137) 0:02:50.875 ********** 2026-04-05 01:05:36.807487 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:36.807494 | orchestrator | 2026-04-05 01:05:36.807512 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-05 01:05:36.807521 | orchestrator | Sunday 05 April 2026 01:05:00 +0000 (0:00:01.990) 0:02:52.865 ********** 2026-04-05 01:05:36.807528 | orchestrator | 2026-04-05 01:05:36.807536 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-05 01:05:36.807544 | orchestrator | Sunday 05 April 2026 01:05:00 +0000 (0:00:00.064) 0:02:52.929 ********** 2026-04-05 01:05:36.807552 | orchestrator | 
2026-04-05 01:05:36.807560 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-05 01:05:36.807567 | orchestrator | Sunday 05 April 2026 01:05:00 +0000 (0:00:00.061) 0:02:52.991 ********** 2026-04-05 01:05:36.807575 | orchestrator | 2026-04-05 01:05:36.807583 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-05 01:05:36.807591 | orchestrator | Sunday 05 April 2026 01:05:00 +0000 (0:00:00.060) 0:02:53.051 ********** 2026-04-05 01:05:36.807598 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:05:36.807606 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:05:36.807614 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:05:36.807621 | orchestrator | 2026-04-05 01:05:36.807635 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:05:36.807644 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-05 01:05:36.807653 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 01:05:36.807661 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 01:05:36.807669 | orchestrator | 2026-04-05 01:05:36.807677 | orchestrator | 2026-04-05 01:05:36.807684 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:05:36.807692 | orchestrator | Sunday 05 April 2026 01:05:35 +0000 (0:00:35.107) 0:03:28.159 ********** 2026-04-05 01:05:36.807700 | orchestrator | =============================================================================== 2026-04-05 01:05:36.807708 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.11s 2026-04-05 01:05:36.807715 | orchestrator | glance : Running Glance bootstrap container 
---------------------------- 31.14s 2026-04-05 01:05:36.807723 | orchestrator | service-ks-register : glance | Creating services ----------------------- 14.03s 2026-04-05 01:05:36.807731 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 9.02s 2026-04-05 01:05:36.807739 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.99s 2026-04-05 01:05:36.807747 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.78s 2026-04-05 01:05:36.807755 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.99s 2026-04-05 01:05:36.807762 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.82s 2026-04-05 01:05:36.807775 | orchestrator | glance : Copying over config.json files for services -------------------- 5.44s 2026-04-05 01:05:36.807783 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.38s 2026-04-05 01:05:36.807791 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.31s 2026-04-05 01:05:36.807798 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.08s 2026-04-05 01:05:36.807806 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.94s 2026-04-05 01:05:36.807814 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.54s 2026-04-05 01:05:36.807821 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.50s 2026-04-05 01:05:36.807829 | orchestrator | glance : Check glance containers ---------------------------------------- 4.46s 2026-04-05 01:05:36.807837 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.45s 2026-04-05 01:05:36.807844 | orchestrator | service-ks-register : glance | Granting user roles 
---------------------- 4.19s 2026-04-05 01:05:36.807852 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.88s 2026-04-05 01:05:36.807860 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.88s 2026-04-05 01:05:36.807868 | orchestrator | 2026-04-05 01:05:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:39.860422 | orchestrator | 2026-04-05 01:05:39 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:39.862812 | orchestrator | 2026-04-05 01:05:39 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:39.865853 | orchestrator | 2026-04-05 01:05:39 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:39.867811 | orchestrator | 2026-04-05 01:05:39 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:39.868010 | orchestrator | 2026-04-05 01:05:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:42.912586 | orchestrator | 2026-04-05 01:05:42 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:42.912856 | orchestrator | 2026-04-05 01:05:42 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:42.914190 | orchestrator | 2026-04-05 01:05:42 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:42.915118 | orchestrator | 2026-04-05 01:05:42 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:42.915155 | orchestrator | 2026-04-05 01:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:45.968839 | orchestrator | 2026-04-05 01:05:45 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:45.970510 | orchestrator | 2026-04-05 01:05:45 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:45.972849 
| orchestrator | 2026-04-05 01:05:45 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:45.974508 | orchestrator | 2026-04-05 01:05:45 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:45.974542 | orchestrator | 2026-04-05 01:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:49.008309 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:49.008395 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:49.009147 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:49.011351 | orchestrator | 2026-04-05 01:05:49 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:49.011417 | orchestrator | 2026-04-05 01:05:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:52.058418 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:52.061859 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:52.064200 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:52.066532 | orchestrator | 2026-04-05 01:05:52 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:52.066841 | orchestrator | 2026-04-05 01:05:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:55.097531 | orchestrator | 2026-04-05 01:05:55 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:55.098996 | orchestrator | 2026-04-05 01:05:55 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:55.100346 | orchestrator | 2026-04-05 
01:05:55 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:55.101770 | orchestrator | 2026-04-05 01:05:55 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:55.101833 | orchestrator | 2026-04-05 01:05:55 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:05:58.146288 | orchestrator | 2026-04-05 01:05:58 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:05:58.150564 | orchestrator | 2026-04-05 01:05:58 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:05:58.153817 | orchestrator | 2026-04-05 01:05:58 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:05:58.156534 | orchestrator | 2026-04-05 01:05:58 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:05:58.156803 | orchestrator | 2026-04-05 01:05:58 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:01.197479 | orchestrator | 2026-04-05 01:06:01 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:01.199616 | orchestrator | 2026-04-05 01:06:01 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:01.202726 | orchestrator | 2026-04-05 01:06:01 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state STARTED 2026-04-05 01:06:01.205017 | orchestrator | 2026-04-05 01:06:01 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:01.205235 | orchestrator | 2026-04-05 01:06:01 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:04.258491 | orchestrator | 2026-04-05 01:06:04 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:04.259272 | orchestrator | 2026-04-05 01:06:04 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:04.263708 | orchestrator | 2026-04-05 01:06:04.263762 | orchestrator 
| 2026-04-05 01:06:04 | INFO  | Task 93f2e915-87ca-427c-9e09-bf42bd2a4e4e is in state SUCCESS 2026-04-05 01:06:04.265902 | orchestrator | 2026-04-05 01:06:04.265987 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:06:04.266185 | orchestrator | 2026-04-05 01:06:04.266212 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:06:04.266223 | orchestrator | Sunday 05 April 2026 01:02:42 +0000 (0:00:00.273) 0:00:00.273 ********** 2026-04-05 01:06:04.266233 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:06:04.266269 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:06:04.266282 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:06:04.266295 | orchestrator | 2026-04-05 01:06:04.266307 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:06:04.266318 | orchestrator | Sunday 05 April 2026 01:02:42 +0000 (0:00:00.270) 0:00:00.543 ********** 2026-04-05 01:06:04.266331 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-05 01:06:04.266343 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-05 01:06:04.266359 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-05 01:06:04.266377 | orchestrator | 2026-04-05 01:06:04.266509 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-05 01:06:04.266522 | orchestrator | 2026-04-05 01:06:04.266534 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 01:06:04.266552 | orchestrator | Sunday 05 April 2026 01:02:43 +0000 (0:00:00.294) 0:00:00.838 ********** 2026-04-05 01:06:04.266569 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:06:04.266588 | orchestrator | 2026-04-05 01:06:04.266607 | orchestrator | 
TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-05 01:06:04.266621 | orchestrator | Sunday 05 April 2026 01:02:43 +0000 (0:00:00.603) 0:00:01.442 ********** 2026-04-05 01:06:04.266631 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-05 01:06:04.266641 | orchestrator | 2026-04-05 01:06:04.266650 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-05 01:06:04.266660 | orchestrator | Sunday 05 April 2026 01:02:48 +0000 (0:00:04.664) 0:00:06.106 ********** 2026-04-05 01:06:04.266670 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-05 01:06:04.266679 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-05 01:06:04.266689 | orchestrator | 2026-04-05 01:06:04.266698 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-05 01:06:04.266708 | orchestrator | Sunday 05 April 2026 01:02:55 +0000 (0:00:07.310) 0:00:13.417 ********** 2026-04-05 01:06:04.266717 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:06:04.266727 | orchestrator | 2026-04-05 01:06:04.266736 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-05 01:06:04.266746 | orchestrator | Sunday 05 April 2026 01:02:59 +0000 (0:00:03.967) 0:00:17.385 ********** 2026-04-05 01:06:04.266755 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-05 01:06:04.266767 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:06:04.266784 | orchestrator | 2026-04-05 01:06:04.266801 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-05 01:06:04.266817 | orchestrator | Sunday 05 April 2026 01:03:03 
+0000 (0:00:04.333) 0:00:21.718 ********** 2026-04-05 01:06:04.266833 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:06:04.266848 | orchestrator | 2026-04-05 01:06:04.266864 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-05 01:06:04.266878 | orchestrator | Sunday 05 April 2026 01:03:07 +0000 (0:00:03.572) 0:00:25.291 ********** 2026-04-05 01:06:04.266894 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-05 01:06:04.266909 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-05 01:06:04.266923 | orchestrator | 2026-04-05 01:06:04.266939 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-05 01:06:04.266955 | orchestrator | Sunday 05 April 2026 01:03:16 +0000 (0:00:08.653) 0:00:33.944 ********** 2026-04-05 01:06:04.266976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.267110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.267205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.267226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.267405 | orchestrator | 2026-04-05 01:06:04.267416 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 01:06:04.267426 | orchestrator | Sunday 05 April 2026 01:03:19 +0000 (0:00:03.811) 0:00:37.756 ********** 2026-04-05 01:06:04.267435 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:04.267445 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:04.267454 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:04.267464 | orchestrator | 2026-04-05 01:06:04.267473 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 01:06:04.267483 | orchestrator | Sunday 05 April 2026 01:03:20 +0000 (0:00:00.361) 0:00:38.117 ********** 2026-04-05 01:06:04.267492 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:06:04.267501 | orchestrator | 2026-04-05 01:06:04.267511 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-05 01:06:04.267527 | orchestrator | Sunday 05 April 2026 01:03:21 +0000 (0:00:01.263) 0:00:39.380 ********** 2026-04-05 01:06:04.267537 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-05 01:06:04.267547 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-05 01:06:04.267556 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-05 01:06:04.267566 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-05 01:06:04.267575 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-05 01:06:04.267585 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-05 01:06:04.267594 | orchestrator | 2026-04-05 01:06:04.267604 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for 
cinder services] ************ 2026-04-05 01:06:04.267613 | orchestrator | Sunday 05 April 2026 01:03:24 +0000 (0:00:02.561) 0:00:41.941 ********** 2026-04-05 01:06:04.267629 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 01:06:04.267640 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 01:06:04.267661 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 01:06:04.267671 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 01:06:04.267689 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 01:06:04.267711 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 01:06:04.267722 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 01:06:04.267738 | orchestrator | changed: 
[testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 01:06:04.267749 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 01:06:04.267765 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 01:06:04.267781 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 01:06:04.267792 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 
01:06:04.267802 | orchestrator | 2026-04-05 01:06:04.267817 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-05 01:06:04.267827 | orchestrator | Sunday 05 April 2026 01:03:28 +0000 (0:00:04.375) 0:00:46.317 ********** 2026-04-05 01:06:04.267836 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 01:06:04.267846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 01:06:04.267856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 01:06:04.267865 | orchestrator | 2026-04-05 01:06:04.267875 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-05 01:06:04.267884 | orchestrator | Sunday 05 April 2026 01:03:31 +0000 (0:00:02.506) 0:00:48.823 ********** 2026-04-05 01:06:04.267894 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-05 01:06:04.267904 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-05 01:06:04.267913 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-05 01:06:04.267923 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:06:04.267932 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:06:04.267942 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 01:06:04.267951 | orchestrator | 2026-04-05 01:06:04.267961 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-05 01:06:04.267976 | orchestrator | Sunday 05 April 2026 01:03:34 +0000 (0:00:03.163) 0:00:51.987 ********** 2026-04-05 01:06:04.267992 | orchestrator | ok: [testbed-node-0] => 
(item=cinder-volume) 2026-04-05 01:06:04.268007 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-05 01:06:04.268023 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-05 01:06:04.268038 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-05 01:06:04.268054 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-05 01:06:04.268069 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-05 01:06:04.268162 | orchestrator | 2026-04-05 01:06:04.268184 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-05 01:06:04.268201 | orchestrator | Sunday 05 April 2026 01:03:35 +0000 (0:00:01.338) 0:00:53.325 ********** 2026-04-05 01:06:04.268217 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:04.268233 | orchestrator | 2026-04-05 01:06:04.268250 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-05 01:06:04.268265 | orchestrator | Sunday 05 April 2026 01:03:36 +0000 (0:00:01.018) 0:00:54.344 ********** 2026-04-05 01:06:04.268282 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:04.268298 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:04.268315 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:04.268331 | orchestrator | 2026-04-05 01:06:04.268347 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 01:06:04.268365 | orchestrator | Sunday 05 April 2026 01:03:37 +0000 (0:00:00.512) 0:00:54.856 ********** 2026-04-05 01:06:04.268382 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:06:04.268398 | orchestrator | 2026-04-05 01:06:04.268423 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-05 01:06:04.268441 | orchestrator | Sunday 05 April 2026 01:03:37 
+0000 (0:00:00.788) 0:00:55.644 ********** 2026-04-05 01:06:04.268467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.268499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.268516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.268534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.268968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.269075 | orchestrator | 2026-04-05 01:06:04.269146 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-05 01:06:04.269167 | orchestrator | Sunday 05 April 2026 01:03:43 +0000 (0:00:05.522) 0:01:01.166 ********** 2026-04-05 01:06:04.269185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269239 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:06:04.269265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269321 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:06:04.269331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269390 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:06:04.269400 | orchestrator |
2026-04-05 01:06:04.269410 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-04-05 01:06:04.269420 | orchestrator | Sunday 05 April 2026 01:03:44 +0000 (0:00:01.334) 0:01:02.501 **********
2026-04-05 01:06:04.269430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269485 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:06:04.269500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269545 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:06:04.269568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269622 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:06:04.269634 | orchestrator |
2026-04-05 01:06:04.269646 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-04-05 01:06:04.269659 | orchestrator | Sunday 05 April 2026 01:03:46 +0000 (0:00:01.425) 0:01:03.926 **********
2026-04-05 01:06:04.269671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.269728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.269897 | orchestrator |
2026-04-05 01:06:04.269910 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-04-05 01:06:04.269920 | orchestrator | Sunday 05 April 2026 01:03:51 +0000 (0:00:05.697) 0:01:09.624 **********
2026-04-05 01:06:04.269930 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-05 01:06:04.269940 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-05 01:06:04.269954 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-05 01:06:04.269972 | orchestrator |
2026-04-05 01:06:04.269984 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-04-05 01:06:04.269994 | orchestrator | Sunday 05 April 2026 01:03:55 +0000 (0:00:03.168) 0:01:12.792 **********
2026-04-05 01:06:04.270011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.270076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.270115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.270136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270313 | orchestrator |
2026-04-05 01:06:04.270335 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-04-05 01:06:04.270346 | orchestrator | Sunday 05 April 2026 01:04:13 +0000 (0:00:18.741) 0:01:31.534 **********
2026-04-05 01:06:04.270357 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:06:04.270375 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:06:04.270390 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:06:04.270400 | orchestrator |
2026-04-05 01:06:04.270410 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] *********************
2026-04-05 01:06:04.270420 | orchestrator | Sunday 05 April 2026 01:04:15 +0000 (0:00:01.796) 0:01:33.331 **********
2026-04-05 01:06:04.270430 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:06:04.270439 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:06:04.270455 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:06:04.270472 | orchestrator |
2026-04-05 01:06:04.270482 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-04-05 01:06:04.270492 | orchestrator | Sunday 05 April 2026 01:04:17 +0000 (0:00:02.022) 0:01:35.353 **********
2026-04-05 01:06:04.270507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-05 01:06:04.270518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-05 01:06:04.270564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes':
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:06:04.270580 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:04.270603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 01:06:04.270620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:06:04.270631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:06:04.270648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:06:04.270658 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:04.270667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 01:06:04.270678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:06:04.270694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 01:06:04.270709 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 01:06:04.270719 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:04.270736 | orchestrator | 2026-04-05 01:06:04.270746 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-05 01:06:04.270755 | orchestrator | Sunday 05 April 2026 01:04:19 +0000 (0:00:01.455) 0:01:36.809 ********** 2026-04-05 01:06:04.270765 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:04.270775 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:04.270784 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:04.270794 | orchestrator | 2026-04-05 01:06:04.270803 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-05 01:06:04.270813 | orchestrator | Sunday 05 April 2026 01:04:19 +0000 (0:00:00.325) 0:01:37.134 ********** 2026-04-05 01:06:04.270823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.270838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.270855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 01:06:04.270870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270977 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 01:06:04.270994 | orchestrator | 2026-04-05 01:06:04.271012 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 01:06:04.271030 | orchestrator | Sunday 05 April 2026 01:04:23 +0000 (0:00:03.721) 0:01:40.855 ********** 2026-04-05 01:06:04.271047 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:04.271064 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:06:04.271077 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:06:04.271153 | orchestrator | 2026-04-05 01:06:04.271165 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-05 01:06:04.271179 | orchestrator | Sunday 05 April 2026 01:04:23 +0000 (0:00:00.277) 0:01:41.132 ********** 2026-04-05 01:06:04.271196 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:06:04.271213 | orchestrator | 2026-04-05 01:06:04.271230 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-05 01:06:04.271241 | orchestrator | Sunday 05 April 2026 01:04:26 +0000 (0:00:02.672) 0:01:43.805 ********** 2026-04-05 01:06:04.271251 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:06:04.271261 | orchestrator | 2026-04-05 01:06:04.271271 | 
orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-05 01:06:04.271280 | orchestrator | Sunday 05 April 2026 01:04:28 +0000 (0:00:02.856) 0:01:46.661 ********** 2026-04-05 01:06:04.271290 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:06:04.271300 | orchestrator | 2026-04-05 01:06:04.271310 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 01:06:04.271319 | orchestrator | Sunday 05 April 2026 01:04:52 +0000 (0:00:24.093) 0:02:10.754 ********** 2026-04-05 01:06:04.271329 | orchestrator | 2026-04-05 01:06:04.271339 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 01:06:04.271349 | orchestrator | Sunday 05 April 2026 01:04:53 +0000 (0:00:00.061) 0:02:10.816 ********** 2026-04-05 01:06:04.271358 | orchestrator | 2026-04-05 01:06:04.271368 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 01:06:04.271377 | orchestrator | Sunday 05 April 2026 01:04:53 +0000 (0:00:00.061) 0:02:10.877 ********** 2026-04-05 01:06:04.271387 | orchestrator | 2026-04-05 01:06:04.271397 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-05 01:06:04.271413 | orchestrator | Sunday 05 April 2026 01:04:53 +0000 (0:00:00.060) 0:02:10.938 ********** 2026-04-05 01:06:04.271430 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:06:04.271447 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:06:04.271464 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:06:04.271488 | orchestrator | 2026-04-05 01:06:04.271506 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-05 01:06:04.271532 | orchestrator | Sunday 05 April 2026 01:05:17 +0000 (0:00:24.269) 0:02:35.207 ********** 2026-04-05 01:06:04.271548 | orchestrator | changed: [testbed-node-0] 
2026-04-05 01:06:04.271559 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:06:04.271569 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:06:04.271583 | orchestrator | 2026-04-05 01:06:04.271601 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-05 01:06:04.271617 | orchestrator | Sunday 05 April 2026 01:05:29 +0000 (0:00:11.605) 0:02:46.812 ********** 2026-04-05 01:06:04.271635 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:06:04.271646 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:06:04.271656 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:06:04.271669 | orchestrator | 2026-04-05 01:06:04.271687 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-05 01:06:04.271704 | orchestrator | Sunday 05 April 2026 01:05:56 +0000 (0:00:27.044) 0:03:13.857 ********** 2026-04-05 01:06:04.271720 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:06:04.271731 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:06:04.271741 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:06:04.271750 | orchestrator | 2026-04-05 01:06:04.271766 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-05 01:06:04.271776 | orchestrator | Sunday 05 April 2026 01:06:02 +0000 (0:00:06.211) 0:03:20.069 ********** 2026-04-05 01:06:04.271786 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:06:04.271795 | orchestrator | 2026-04-05 01:06:04.271805 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:06:04.271816 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 01:06:04.271826 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:06:04.271836 | orchestrator | testbed-node-2 : ok=22  
changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:06:04.271845 | orchestrator | 2026-04-05 01:06:04.271855 | orchestrator | 2026-04-05 01:06:04.271864 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:06:04.271874 | orchestrator | Sunday 05 April 2026 01:06:02 +0000 (0:00:00.252) 0:03:20.321 ********** 2026-04-05 01:06:04.271884 | orchestrator | =============================================================================== 2026-04-05 01:06:04.271893 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 27.04s 2026-04-05 01:06:04.271903 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.27s 2026-04-05 01:06:04.271912 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 24.09s 2026-04-05 01:06:04.271922 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.74s 2026-04-05 01:06:04.271931 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.61s 2026-04-05 01:06:04.271941 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.65s 2026-04-05 01:06:04.271950 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.31s 2026-04-05 01:06:04.271959 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.21s 2026-04-05 01:06:04.271969 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.70s 2026-04-05 01:06:04.271979 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.52s 2026-04-05 01:06:04.271988 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.66s 2026-04-05 01:06:04.271998 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services 
------------ 4.38s 2026-04-05 01:06:04.272015 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.33s 2026-04-05 01:06:04.272024 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.97s 2026-04-05 01:06:04.272034 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.81s 2026-04-05 01:06:04.272044 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.72s 2026-04-05 01:06:04.272053 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.57s 2026-04-05 01:06:04.272063 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.17s 2026-04-05 01:06:04.272072 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.16s 2026-04-05 01:06:04.272082 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.86s 2026-04-05 01:06:04.272125 | orchestrator | 2026-04-05 01:06:04 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:04.272137 | orchestrator | 2026-04-05 01:06:04 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:06:04.272147 | orchestrator | 2026-04-05 01:06:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:07.303057 | orchestrator | 2026-04-05 01:06:07 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:07.303493 | orchestrator | 2026-04-05 01:06:07 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:07.304293 | orchestrator | 2026-04-05 01:06:07 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:07.305332 | orchestrator | 2026-04-05 01:06:07 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:06:07.305423 | orchestrator | 2026-04-05 
01:06:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:46.922627 | orchestrator | 2026-04-05 
01:06:46 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:46.922823 | orchestrator | 2026-04-05 01:06:46 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:46.924499 | orchestrator | 2026-04-05 01:06:46 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:46.925012 | orchestrator | 2026-04-05 01:06:46 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:06:46.925047 | orchestrator | 2026-04-05 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:49.960962 | orchestrator | 2026-04-05 01:06:49 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:49.961118 | orchestrator | 2026-04-05 01:06:49 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:49.961820 | orchestrator | 2026-04-05 01:06:49 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:49.962361 | orchestrator | 2026-04-05 01:06:49 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:06:49.962401 | orchestrator | 2026-04-05 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:52.988098 | orchestrator | 2026-04-05 01:06:52 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:52.991908 | orchestrator | 2026-04-05 01:06:52 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:52.992358 | orchestrator | 2026-04-05 01:06:52 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:52.992976 | orchestrator | 2026-04-05 01:06:52 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:06:52.993021 | orchestrator | 2026-04-05 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:56.038909 | orchestrator | 2026-04-05 01:06:56 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:56.044344 | orchestrator | 2026-04-05 01:06:56 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:56.047526 | orchestrator | 2026-04-05 01:06:56 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:56.048324 | orchestrator | 2026-04-05 01:06:56 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:06:56.048452 | orchestrator | 2026-04-05 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:06:59.073340 | orchestrator | 2026-04-05 01:06:59 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:06:59.073415 | orchestrator | 2026-04-05 01:06:59 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:06:59.074117 | orchestrator | 2026-04-05 01:06:59 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:06:59.074766 | orchestrator | 2026-04-05 01:06:59 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:06:59.074800 | orchestrator | 2026-04-05 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:02.102604 | orchestrator | 2026-04-05 01:07:02 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:02.104385 | orchestrator | 2026-04-05 01:07:02 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:02.104428 | orchestrator | 2026-04-05 01:07:02 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:02.104441 | orchestrator | 2026-04-05 01:07:02 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:02.104452 | orchestrator | 2026-04-05 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:05.189019 | orchestrator | 2026-04-05 01:07:05 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:05.189108 | orchestrator | 2026-04-05 01:07:05 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:05.189122 | orchestrator | 2026-04-05 01:07:05 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:05.189134 | orchestrator | 2026-04-05 01:07:05 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:05.189145 | orchestrator | 2026-04-05 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:08.177820 | orchestrator | 2026-04-05 01:07:08 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:08.178470 | orchestrator | 2026-04-05 01:07:08 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:08.179590 | orchestrator | 2026-04-05 01:07:08 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:08.181447 | orchestrator | 2026-04-05 01:07:08 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:08.181645 | orchestrator | 2026-04-05 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:11.230951 | orchestrator | 2026-04-05 01:07:11 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:11.232042 | orchestrator | 2026-04-05 01:07:11 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:11.234387 | orchestrator | 2026-04-05 01:07:11 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:11.234791 | orchestrator | 2026-04-05 01:07:11 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:11.234830 | orchestrator | 2026-04-05 01:07:11 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:14.262486 | orchestrator | 2026-04-05 01:07:14 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:14.262774 | orchestrator | 2026-04-05 01:07:14 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:14.265445 | orchestrator | 2026-04-05 01:07:14 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:14.265958 | orchestrator | 2026-04-05 01:07:14 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:14.265978 | orchestrator | 2026-04-05 01:07:14 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:17.305072 | orchestrator | 2026-04-05 01:07:17 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:17.305318 | orchestrator | 2026-04-05 01:07:17 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:17.305486 | orchestrator | 2026-04-05 01:07:17 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:17.306359 | orchestrator | 2026-04-05 01:07:17 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:17.306410 | orchestrator | 2026-04-05 01:07:17 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:20.335260 | orchestrator | 2026-04-05 01:07:20 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:20.335768 | orchestrator | 2026-04-05 01:07:20 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:20.336503 | orchestrator | 2026-04-05 01:07:20 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:20.337541 | orchestrator | 2026-04-05 01:07:20 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:20.337596 | orchestrator | 2026-04-05 01:07:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:23.362912 | orchestrator | 2026-04-05 01:07:23 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:23.363165 | orchestrator | 2026-04-05 01:07:23 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:23.364107 | orchestrator | 2026-04-05 01:07:23 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:23.364511 | orchestrator | 2026-04-05 01:07:23 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:23.364534 | orchestrator | 2026-04-05 01:07:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:26.394985 | orchestrator | 2026-04-05 01:07:26 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:26.395546 | orchestrator | 2026-04-05 01:07:26 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:26.398136 | orchestrator | 2026-04-05 01:07:26 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:26.398585 | orchestrator | 2026-04-05 01:07:26 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:26.398619 | orchestrator | 2026-04-05 01:07:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:29.449094 | orchestrator | 2026-04-05 01:07:29 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:29.450313 | orchestrator | 2026-04-05 01:07:29 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:29.450994 | orchestrator | 2026-04-05 01:07:29 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:29.453017 | orchestrator | 2026-04-05 01:07:29 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:29.453090 | orchestrator | 2026-04-05 01:07:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:32.508215 | orchestrator | 2026-04-05 01:07:32 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:32.509310 | orchestrator | 2026-04-05 01:07:32 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:32.510140 | orchestrator | 2026-04-05 01:07:32 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:32.511276 | orchestrator | 2026-04-05 01:07:32 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:32.511300 | orchestrator | 2026-04-05 01:07:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:35.561467 | orchestrator | 2026-04-05 01:07:35 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:35.562489 | orchestrator | 2026-04-05 01:07:35 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:35.563371 | orchestrator | 2026-04-05 01:07:35 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:35.564340 | orchestrator | 2026-04-05 01:07:35 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:35.564408 | orchestrator | 2026-04-05 01:07:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:38.591372 | orchestrator | 2026-04-05 01:07:38 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:38.591648 | orchestrator | 2026-04-05 01:07:38 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:38.592416 | orchestrator | 2026-04-05 01:07:38 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:38.592867 | orchestrator | 2026-04-05 01:07:38 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:38.593115 | orchestrator | 2026-04-05 01:07:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:41.628543 | orchestrator | 2026-04-05 01:07:41 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:41.630623 | orchestrator | 2026-04-05 01:07:41 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:41.632297 | orchestrator | 2026-04-05 01:07:41 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:41.634545 | orchestrator | 2026-04-05 01:07:41 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:41.634695 | orchestrator | 2026-04-05 01:07:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:44.662867 | orchestrator | 2026-04-05 01:07:44 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:44.665716 | orchestrator | 2026-04-05 01:07:44 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:44.666893 | orchestrator | 2026-04-05 01:07:44 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state STARTED 2026-04-05 01:07:44.668014 | orchestrator | 2026-04-05 01:07:44 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:44.668044 | orchestrator | 2026-04-05 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:47.699043 | orchestrator | 2026-04-05 01:07:47 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:47.700892 | orchestrator | 2026-04-05 01:07:47 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:47.702808 | orchestrator | 2026-04-05 01:07:47 | INFO  | Task 6e1f3680-63a0-4689-a8e3-fccf945973d0 is in state SUCCESS 2026-04-05 01:07:47.702926 | orchestrator | 2026-04-05 01:07:47.705186 | orchestrator | 2026-04-05 01:07:47.705370 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:07:47.705394 | orchestrator | 2026-04-05 01:07:47.705406 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-04-05 01:07:47.705418 | orchestrator | Sunday 05 April 2026 01:05:39 +0000 (0:00:00.345) 0:00:00.345 ********** 2026-04-05 01:07:47.705429 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:07:47.705441 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:07:47.705452 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:07:47.705463 | orchestrator | 2026-04-05 01:07:47.705475 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:07:47.705486 | orchestrator | Sunday 05 April 2026 01:05:39 +0000 (0:00:00.275) 0:00:00.621 ********** 2026-04-05 01:07:47.705497 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-05 01:07:47.705509 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-05 01:07:47.705520 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-05 01:07:47.705531 | orchestrator | 2026-04-05 01:07:47.705542 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-05 01:07:47.705553 | orchestrator | 2026-04-05 01:07:47.705564 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 01:07:47.705575 | orchestrator | Sunday 05 April 2026 01:05:39 +0000 (0:00:00.284) 0:00:00.905 ********** 2026-04-05 01:07:47.705586 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:07:47.705597 | orchestrator | 2026-04-05 01:07:47.705609 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-05 01:07:47.705620 | orchestrator | Sunday 05 April 2026 01:05:40 +0000 (0:00:00.668) 0:00:01.574 ********** 2026-04-05 01:07:47.705631 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-05 01:07:47.705642 | orchestrator | 2026-04-05 01:07:47.705654 | 
orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-05 01:07:47.705665 | orchestrator | Sunday 05 April 2026 01:05:44 +0000 (0:00:03.956) 0:00:05.530 ********** 2026-04-05 01:07:47.705675 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-05 01:07:47.705686 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-05 01:07:47.705697 | orchestrator | 2026-04-05 01:07:47.705709 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-05 01:07:47.705720 | orchestrator | Sunday 05 April 2026 01:05:51 +0000 (0:00:07.566) 0:00:13.097 ********** 2026-04-05 01:07:47.705731 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:07:47.705742 | orchestrator | 2026-04-05 01:07:47.705753 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-05 01:07:47.705764 | orchestrator | Sunday 05 April 2026 01:05:55 +0000 (0:00:03.569) 0:00:16.666 ********** 2026-04-05 01:07:47.705802 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-05 01:07:47.705814 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:07:47.705825 | orchestrator | 2026-04-05 01:07:47.705836 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-05 01:07:47.705847 | orchestrator | Sunday 05 April 2026 01:05:59 +0000 (0:00:04.494) 0:00:21.160 ********** 2026-04-05 01:07:47.705872 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:07:47.705883 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-05 01:07:47.705894 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-05 01:07:47.705905 | orchestrator | changed: [testbed-node-0] => (item=observer) 
2026-04-05 01:07:47.705915 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-04-05 01:07:47.705926 | orchestrator |
2026-04-05 01:07:47.705937 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-04-05 01:07:47.705948 | orchestrator | Sunday 05 April 2026 01:06:18 +0000 (0:00:18.262) 0:00:39.422 **********
2026-04-05 01:07:47.705960 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-04-05 01:07:47.705970 | orchestrator |
2026-04-05 01:07:47.705981 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-04-05 01:07:47.705992 | orchestrator | Sunday 05 April 2026 01:06:22 +0000 (0:00:04.266) 0:00:43.689 **********
2026-04-05 01:07:47.706008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 01:07:47.706101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 01:07:47.706116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 01:07:47.706157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706233 | orchestrator |
2026-04-05 01:07:47.706309 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-04-05 01:07:47.706323 | orchestrator | Sunday 05 April 2026 01:06:24 +0000 (0:00:02.242) 0:00:45.932 **********
2026-04-05 01:07:47.706334 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-04-05 01:07:47.706345 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-04-05 01:07:47.706355 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-04-05 01:07:47.706366 | orchestrator |
2026-04-05 01:07:47.706377 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-04-05 01:07:47.706388 | orchestrator | Sunday 05 April 2026 01:06:26 +0000 (0:00:01.419) 0:00:47.351 **********
2026-04-05 01:07:47.706398 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:07:47.706409 | orchestrator |
2026-04-05 01:07:47.706420 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-04-05 01:07:47.706431 | orchestrator | Sunday 05 April 2026 01:06:26 +0000 (0:00:00.135) 0:00:47.487 **********
2026-04-05 01:07:47.706441 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:07:47.706452 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:07:47.706463 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:07:47.706474 | orchestrator |
2026-04-05 01:07:47.706485 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-05 01:07:47.706541 | orchestrator | Sunday 05 April 2026 01:06:26 +0000 (0:00:00.282) 0:00:47.769 **********
2026-04-05 01:07:47.706559 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:07:47.706620 | orchestrator |
2026-04-05 01:07:47.706632 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-04-05 01:07:47.706643 | orchestrator | Sunday 05 April 2026 01:06:27 +0000 (0:00:01.108) 0:00:48.878 **********
2026-04-05 01:07:47.706655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 01:07:47.706676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 01:07:47.706688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 01:07:47.706709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 01:07:47.706772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05
01:07:47.706784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.706804 | orchestrator | 2026-04-05 01:07:47.706815 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-05 01:07:47.706826 | orchestrator | Sunday 05 April 2026 01:06:31 +0000 (0:00:03.944) 0:00:52.822 ********** 2026-04-05 01:07:47.706837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.706853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.706905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.706935 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:47.706955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.706967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.706987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.706998 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:47.707010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.707027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707050 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:47.707061 | orchestrator | 
2026-04-05 01:07:47.707072 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-05 01:07:47.707083 | orchestrator | Sunday 05 April 2026 01:06:32 +0000 (0:00:00.692) 0:00:53.515 ********** 2026-04-05 01:07:47.707101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.707130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707153 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:47.707169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.707181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707216 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:47.707236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.707310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707335 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:47.707346 | orchestrator | 2026-04-05 01:07:47.707357 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-05 01:07:47.707368 | orchestrator | Sunday 05 April 2026 01:06:33 +0000 (0:00:00.851) 0:00:54.366 ********** 2026-04-05 01:07:47.707386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.707405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.707427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.707439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707526 | orchestrator | 2026-04-05 01:07:47.707537 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-05 01:07:47.707548 | orchestrator | Sunday 05 April 2026 01:06:38 +0000 (0:00:05.017) 0:00:59.384 ********** 2026-04-05 01:07:47.707559 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:07:47.707570 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:47.707587 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:07:47.707603 | orchestrator | 2026-04-05 01:07:47.707614 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-05 01:07:47.707625 | orchestrator | Sunday 05 April 2026 01:06:40 +0000 (0:00:02.386) 0:01:01.770 ********** 2026-04-05 01:07:47.707634 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:07:47.707644 | orchestrator | 2026-04-05 01:07:47.707653 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-05 01:07:47.707663 | orchestrator | Sunday 05 April 2026 01:06:42 +0000 (0:00:01.753) 0:01:03.524 ********** 2026-04-05 01:07:47.707672 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:47.707682 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:47.707691 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:47.707701 | orchestrator | 2026-04-05 01:07:47.707711 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-05 01:07:47.707720 | orchestrator | Sunday 05 April 
2026 01:06:43 +0000 (0:00:00.883) 0:01:04.407 ********** 2026-04-05 01:07:47.707735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.707745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 
01:07:47.707768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.707780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.707868 | orchestrator | 2026-04-05 01:07:47.707878 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-05 01:07:47.707888 | orchestrator | Sunday 05 April 2026 01:06:55 +0000 (0:00:11.769) 0:01:16.177 ********** 2026-04-05 01:07:47.707905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.707916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707936 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:47.707957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.707967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.707993 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:47.708003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 01:07:47.708013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.708028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:07:47.708044 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:47.708054 | orchestrator | 2026-04-05 01:07:47.708064 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-05 01:07:47.708073 | orchestrator | Sunday 05 April 2026 01:06:56 +0000 (0:00:01.777) 0:01:17.955 ********** 2026-04-05 01:07:47.708083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.708100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.708110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 01:07:47.708121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.708141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.708151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.708161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.708178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.708189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:07:47.708199 | orchestrator | 2026-04-05 01:07:47.708209 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 01:07:47.708218 | orchestrator | Sunday 05 April 2026 01:07:00 +0000 (0:00:03.720) 0:01:21.675 ********** 2026-04-05 01:07:47.708228 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:07:47.708238 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:07:47.708270 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:07:47.708280 | orchestrator | 2026-04-05 01:07:47.708290 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-05 
01:07:47.708299 | orchestrator | Sunday 05 April 2026 01:07:00 +0000 (0:00:00.453) 0:01:22.129 ********** 2026-04-05 01:07:47.708318 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:47.708328 | orchestrator | 2026-04-05 01:07:47.708338 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-05 01:07:47.708347 | orchestrator | Sunday 05 April 2026 01:07:03 +0000 (0:00:02.462) 0:01:24.591 ********** 2026-04-05 01:07:47.708357 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:47.708366 | orchestrator | 2026-04-05 01:07:47.708376 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-05 01:07:47.708385 | orchestrator | Sunday 05 April 2026 01:07:05 +0000 (0:00:02.511) 0:01:27.103 ********** 2026-04-05 01:07:47.708395 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:47.708404 | orchestrator | 2026-04-05 01:07:47.708414 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-05 01:07:47.708423 | orchestrator | Sunday 05 April 2026 01:07:18 +0000 (0:00:12.869) 0:01:39.973 ********** 2026-04-05 01:07:47.708433 | orchestrator | 2026-04-05 01:07:47.708443 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-05 01:07:47.708452 | orchestrator | Sunday 05 April 2026 01:07:18 +0000 (0:00:00.186) 0:01:40.159 ********** 2026-04-05 01:07:47.708462 | orchestrator | 2026-04-05 01:07:47.708471 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-05 01:07:47.708486 | orchestrator | Sunday 05 April 2026 01:07:19 +0000 (0:00:00.060) 0:01:40.219 ********** 2026-04-05 01:07:47.708495 | orchestrator | 2026-04-05 01:07:47.708505 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-05 01:07:47.708515 | orchestrator | Sunday 05 April 2026 01:07:19 +0000 
(0:00:00.066) 0:01:40.286 ********** 2026-04-05 01:07:47.708524 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:07:47.708534 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:47.708543 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:07:47.708553 | orchestrator | 2026-04-05 01:07:47.708562 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-05 01:07:47.708572 | orchestrator | Sunday 05 April 2026 01:07:32 +0000 (0:00:13.029) 0:01:53.316 ********** 2026-04-05 01:07:47.708581 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:47.708591 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:07:47.708600 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:07:47.708610 | orchestrator | 2026-04-05 01:07:47.708620 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-05 01:07:47.708629 | orchestrator | Sunday 05 April 2026 01:07:39 +0000 (0:00:06.984) 0:02:00.300 ********** 2026-04-05 01:07:47.708639 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:07:47.708648 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:07:47.708658 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:07:47.708667 | orchestrator | 2026-04-05 01:07:47.708677 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:07:47.708687 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:07:47.708698 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:07:47.708708 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:07:47.708717 | orchestrator | 2026-04-05 01:07:47.708727 | orchestrator | 2026-04-05 01:07:47.708737 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 01:07:47.708746 | orchestrator | Sunday 05 April 2026 01:07:45 +0000 (0:00:06.344) 0:02:06.644 ********** 2026-04-05 01:07:47.708756 | orchestrator | =============================================================================== 2026-04-05 01:07:47.708765 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.26s 2026-04-05 01:07:47.708780 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.03s 2026-04-05 01:07:47.708796 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.87s 2026-04-05 01:07:47.708806 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.77s 2026-04-05 01:07:47.708816 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.57s 2026-04-05 01:07:47.708826 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.98s 2026-04-05 01:07:47.708835 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.34s 2026-04-05 01:07:47.708845 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.02s 2026-04-05 01:07:47.708855 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.49s 2026-04-05 01:07:47.708864 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.27s 2026-04-05 01:07:47.708874 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.96s 2026-04-05 01:07:47.708883 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.94s 2026-04-05 01:07:47.708893 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.72s 2026-04-05 01:07:47.708902 | orchestrator | service-ks-register : barbican | 
Creating projects ---------------------- 3.57s 2026-04-05 01:07:47.708916 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.51s 2026-04-05 01:07:47.708933 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.46s 2026-04-05 01:07:47.708952 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.39s 2026-04-05 01:07:47.708976 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.24s 2026-04-05 01:07:47.708992 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.78s 2026-04-05 01:07:47.709007 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.75s 2026-04-05 01:07:47.709023 | orchestrator | 2026-04-05 01:07:47 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:47.709197 | orchestrator | 2026-04-05 01:07:47 | INFO  | Task 14c3e237-9981-441e-9514-ac9399fea0b7 is in state STARTED 2026-04-05 01:07:47.709217 | orchestrator | 2026-04-05 01:07:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:50.731142 | orchestrator | 2026-04-05 01:07:50 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:50.733844 | orchestrator | 2026-04-05 01:07:50 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:50.734515 | orchestrator | 2026-04-05 01:07:50 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:50.735065 | orchestrator | 2026-04-05 01:07:50 | INFO  | Task 14c3e237-9981-441e-9514-ac9399fea0b7 is in state STARTED 2026-04-05 01:07:50.735286 | orchestrator | 2026-04-05 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:07:53.756696 | orchestrator | 2026-04-05 01:07:53 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:07:53.757117 | 
orchestrator | 2026-04-05 01:07:53 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:07:53.758096 | orchestrator | 2026-04-05 01:07:53 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:07:53.758629 | orchestrator | 2026-04-05 01:07:53 | INFO  | Task 14c3e237-9981-441e-9514-ac9399fea0b7 is in state STARTED 2026-04-05 01:07:53.758741 | orchestrator | 2026-04-05 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:08:57.951644 | orchestrator | 2026-04-05 01:08:57 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:08:57.953861 | orchestrator | 2026-04-05 01:08:57 | INFO  | Task 
a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:08:57.954386 | orchestrator | 2026-04-05 01:08:57 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:08:57.955970 | orchestrator | 2026-04-05 01:08:57 | INFO  | Task 14c3e237-9981-441e-9514-ac9399fea0b7 is in state STARTED 2026-04-05 01:08:57.956006 | orchestrator | 2026-04-05 01:08:57 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:00.992000 | orchestrator | 2026-04-05 01:09:00 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:00.992260 | orchestrator | 2026-04-05 01:09:00 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:00.996231 | orchestrator | 2026-04-05 01:09:00 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:00.999544 | orchestrator | 2026-04-05 01:09:00 | INFO  | Task 14c3e237-9981-441e-9514-ac9399fea0b7 is in state STARTED 2026-04-05 01:09:00.999602 | orchestrator | 2026-04-05 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:04.031601 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:04.033574 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:04.034265 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:04.035394 | orchestrator | 2026-04-05 01:09:04 | INFO  | Task 14c3e237-9981-441e-9514-ac9399fea0b7 is in state STARTED 2026-04-05 01:09:04.036100 | orchestrator | 2026-04-05 01:09:04 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:07.092319 | orchestrator | 2026-04-05 01:09:07 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:07.092641 | orchestrator | 2026-04-05 01:09:07 | INFO  | Task 
a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:07.093494 | orchestrator | 2026-04-05 01:09:07 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:07.094838 | orchestrator | 2026-04-05 01:09:07 | INFO  | Task 14c3e237-9981-441e-9514-ac9399fea0b7 is in state SUCCESS 2026-04-05 01:09:07.095776 | orchestrator | 2026-04-05 01:09:07 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:07.095809 | orchestrator | 2026-04-05 01:09:07 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:10.136642 | orchestrator | 2026-04-05 01:09:10 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:10.137126 | orchestrator | 2026-04-05 01:09:10 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:10.138441 | orchestrator | 2026-04-05 01:09:10 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:10.140093 | orchestrator | 2026-04-05 01:09:10 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:10.140141 | orchestrator | 2026-04-05 01:09:10 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:13.171027 | orchestrator | 2026-04-05 01:09:13 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:13.172053 | orchestrator | 2026-04-05 01:09:13 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:13.174221 | orchestrator | 2026-04-05 01:09:13 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:13.176300 | orchestrator | 2026-04-05 01:09:13 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:13.176346 | orchestrator | 2026-04-05 01:09:13 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:16.229860 | orchestrator | 2026-04-05 01:09:16 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:16.232726 | orchestrator | 2026-04-05 01:09:16 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:16.236808 | orchestrator | 2026-04-05 01:09:16 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:16.238549 | orchestrator | 2026-04-05 01:09:16 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:16.238678 | orchestrator | 2026-04-05 01:09:16 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:19.283881 | orchestrator | 2026-04-05 01:09:19 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:19.286608 | orchestrator | 2026-04-05 01:09:19 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:19.288219 | orchestrator | 2026-04-05 01:09:19 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:19.292282 | orchestrator | 2026-04-05 01:09:19 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:19.292336 | orchestrator | 2026-04-05 01:09:19 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:22.331317 | orchestrator | 2026-04-05 01:09:22 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:22.332597 | orchestrator | 2026-04-05 01:09:22 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:22.333579 | orchestrator | 2026-04-05 01:09:22 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:22.333967 | orchestrator | 2026-04-05 01:09:22 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:22.333999 | orchestrator | 2026-04-05 01:09:22 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:25.390308 | orchestrator | 2026-04-05 01:09:25 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:25.390717 | orchestrator | 2026-04-05 01:09:25 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:25.393531 | orchestrator | 2026-04-05 01:09:25 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:25.393953 | orchestrator | 2026-04-05 01:09:25 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:25.393984 | orchestrator | 2026-04-05 01:09:25 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:28.424570 | orchestrator | 2026-04-05 01:09:28 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:28.424750 | orchestrator | 2026-04-05 01:09:28 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:28.425598 | orchestrator | 2026-04-05 01:09:28 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:28.426299 | orchestrator | 2026-04-05 01:09:28 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:28.426332 | orchestrator | 2026-04-05 01:09:28 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:31.499260 | orchestrator | 2026-04-05 01:09:31 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:31.499539 | orchestrator | 2026-04-05 01:09:31 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:31.500215 | orchestrator | 2026-04-05 01:09:31 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED 2026-04-05 01:09:31.500776 | orchestrator | 2026-04-05 01:09:31 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:31.500799 | orchestrator | 2026-04-05 01:09:31 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:34.558624 | orchestrator | 2026-04-05 01:09:34 | INFO  | Task 
b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED
2026-04-05 01:09:34.560459 | orchestrator | 2026-04-05 01:09:34 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED
2026-04-05 01:09:34.563686 | orchestrator | 2026-04-05 01:09:34 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED
2026-04-05 01:09:34.563756 | orchestrator | 2026-04-05 01:09:34 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED
2026-04-05 01:09:34.563780 | orchestrator | 2026-04-05 01:09:34 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:37.659367 | orchestrator | 2026-04-05 01:09:37 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED
2026-04-05 01:09:37.659934 | orchestrator | 2026-04-05 01:09:37 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED
2026-04-05 01:09:37.661237 | orchestrator | 2026-04-05 01:09:37 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state STARTED
2026-04-05 01:09:37.662953 | orchestrator | 2026-04-05 01:09:37 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED
2026-04-05 01:09:37.662981 | orchestrator | 2026-04-05 01:09:37 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:09:40.716605 | orchestrator | 2026-04-05 01:09:40 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED
2026-04-05 01:09:40.718190 | orchestrator | 2026-04-05 01:09:40 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED
2026-04-05 01:09:40.726853 | orchestrator | 2026-04-05 01:09:40 | INFO  | Task 47948cf2-ea8f-4955-828f-00609beb94e2 is in state SUCCESS
2026-04-05 01:09:40.728542 | orchestrator |
2026-04-05 01:09:40.728589 | orchestrator |
2026-04-05 01:09:40.728603 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-04-05 01:09:40.728615 | orchestrator |
2026-04-05 01:09:40.728626 | orchestrator | TASK [Ensure the destination directory exists]
*********************************
2026-04-05 01:09:40.728637 | orchestrator | Sunday 05 April 2026 01:07:51 +0000 (0:00:00.231) 0:00:00.231 **********
2026-04-05 01:09:40.728701 | orchestrator | changed: [localhost]
2026-04-05 01:09:40.728714 | orchestrator |
2026-04-05 01:09:40.728725 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-04-05 01:09:40.728736 | orchestrator | Sunday 05 April 2026 01:07:53 +0000 (0:00:01.538) 0:00:01.770 **********
2026-04-05 01:09:40.728747 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-04-05 01:09:40.728758 | orchestrator | changed: [localhost]
2026-04-05 01:09:40.728768 | orchestrator |
2026-04-05 01:09:40.728779 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-04-05 01:09:40.728790 | orchestrator | Sunday 05 April 2026 01:08:55 +0000 (0:01:02.015) 0:01:03.785 **********
2026-04-05 01:09:40.728818 | orchestrator | changed: [localhost]
2026-04-05 01:09:40.728829 | orchestrator |
2026-04-05 01:09:40.728839 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:09:40.728850 | orchestrator |
2026-04-05 01:09:40.728861 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:09:40.728871 | orchestrator | Sunday 05 April 2026 01:09:02 +0000 (0:00:07.586) 0:01:11.371 **********
2026-04-05 01:09:40.728882 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:09:40.728892 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:09:40.728903 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:09:40.728914 | orchestrator |
2026-04-05 01:09:40.728924 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:09:40.728935 | orchestrator | Sunday 05 April 2026 01:09:03 +0000 (0:00:00.577) 0:01:11.949 **********
2026-04-05 01:09:40.728946 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-04-05 01:09:40.728957 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-04-05 01:09:40.728968 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-04-05 01:09:40.728978 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-04-05 01:09:40.728989 | orchestrator |
2026-04-05 01:09:40.729000 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-04-05 01:09:40.729074 | orchestrator | skipping: no hosts matched
2026-04-05 01:09:40.729086 | orchestrator |
2026-04-05 01:09:40.729097 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:09:40.729107 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:09:40.729120 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:09:40.729132 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:09:40.729146 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 01:09:40.729159 | orchestrator |
2026-04-05 01:09:40.729171 | orchestrator |
2026-04-05 01:09:40.729184 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:09:40.729197 | orchestrator | Sunday 05 April 2026 01:09:04 +0000 (0:00:00.645) 0:01:12.595 **********
2026-04-05 01:09:40.729211 | orchestrator | ===============================================================================
2026-04-05 01:09:40.729245 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 62.02s
2026-04-05 01:09:40.729264 | orchestrator | Download ironic-agent kernel -------------------------------------------- 7.58s
2026-04-05 01:09:40.729283 | orchestrator | Ensure the destination directory exists --------------------------------- 1.54s
2026-04-05 01:09:40.729302 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-04-05 01:09:40.729321 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.58s
2026-04-05 01:09:40.729341 | orchestrator |
2026-04-05 01:09:40.729363 | orchestrator |
2026-04-05 01:09:40.729397 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:09:40.729410 | orchestrator |
2026-04-05 01:09:40.729423 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:09:40.729436 | orchestrator | Sunday 05 April 2026 01:06:08 +0000 (0:00:00.680) 0:00:00.680 **********
2026-04-05 01:09:40.729449 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:09:40.729461 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:09:40.729473 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:09:40.729485 | orchestrator |
2026-04-05 01:09:40.729498 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:09:40.729512 | orchestrator | Sunday 05 April 2026 01:06:09 +0000 (0:00:00.328) 0:00:01.008 **********
2026-04-05 01:09:40.729523 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-05 01:09:40.729533 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-05 01:09:40.729544 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-05 01:09:40.729555 | orchestrator |
2026-04-05 01:09:40.729566 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-05 01:09:40.729576 | orchestrator |
2026-04-05 01:09:40.729587 | orchestrator | TASK [designate : include_tasks]
***********************************************
2026-04-05 01:09:40.729611 | orchestrator | Sunday 05 April 2026 01:06:09 +0000 (0:00:00.325) 0:00:01.334 **********
2026-04-05 01:09:40.729633 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:09:40.729652 | orchestrator |
2026-04-05 01:09:40.729687 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-05 01:09:40.729704 | orchestrator | Sunday 05 April 2026 01:06:10 +0000 (0:00:00.677) 0:00:02.011 **********
2026-04-05 01:09:40.729722 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-05 01:09:40.729753 | orchestrator |
2026-04-05 01:09:40.729791 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-05 01:09:40.729827 | orchestrator | Sunday 05 April 2026 01:06:14 +0000 (0:00:04.235) 0:00:06.247 **********
2026-04-05 01:09:40.729864 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-05 01:09:40.729903 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-05 01:09:40.729940 | orchestrator |
2026-04-05 01:09:40.729978 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-05 01:09:40.730124 | orchestrator | Sunday 05 April 2026 01:06:21 +0000 (0:00:07.417) 0:00:13.664 **********
2026-04-05 01:09:40.730171 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 01:09:40.730212 | orchestrator |
2026-04-05 01:09:40.730251 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-05 01:09:40.730290 | orchestrator | Sunday 05 April 2026 01:06:25 +0000 (0:00:03.837) 0:00:17.501 **********
2026-04-05 01:09:40.730327 | orchestrator | changed: [testbed-node-0] =>
(item=designate -> service) 2026-04-05 01:09:40.730357 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:09:40.730465 | orchestrator | 2026-04-05 01:09:40.730497 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-05 01:09:40.730523 | orchestrator | Sunday 05 April 2026 01:06:30 +0000 (0:00:04.379) 0:00:21.881 ********** 2026-04-05 01:09:40.730577 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:09:40.730601 | orchestrator | 2026-04-05 01:09:40.730627 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-05 01:09:40.730652 | orchestrator | Sunday 05 April 2026 01:06:34 +0000 (0:00:03.940) 0:00:25.822 ********** 2026-04-05 01:09:40.730678 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-05 01:09:40.730702 | orchestrator | 2026-04-05 01:09:40.730727 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-05 01:09:40.730753 | orchestrator | Sunday 05 April 2026 01:06:38 +0000 (0:00:04.548) 0:00:30.370 ********** 2026-04-05 01:09:40.730777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.730799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.730828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.730863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.730895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.730913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.730929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.730940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.730952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731266 | orchestrator | 2026-04-05 01:09:40.731277 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-05 01:09:40.731287 | orchestrator | Sunday 05 April 2026 01:06:43 +0000 (0:00:04.646) 0:00:35.016 ********** 2026-04-05 01:09:40.731297 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:40.731306 | orchestrator | 2026-04-05 01:09:40.731316 | orchestrator | TASK [designate : Set designate policy file] 
*********************************** 2026-04-05 01:09:40.731326 | orchestrator | Sunday 05 April 2026 01:06:43 +0000 (0:00:00.312) 0:00:35.328 ********** 2026-04-05 01:09:40.731335 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:40.731345 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:40.731354 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:40.731364 | orchestrator | 2026-04-05 01:09:40.731374 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 01:09:40.731408 | orchestrator | Sunday 05 April 2026 01:06:44 +0000 (0:00:01.238) 0:00:36.567 ********** 2026-04-05 01:09:40.731418 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:09:40.731428 | orchestrator | 2026-04-05 01:09:40.731438 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-05 01:09:40.731448 | orchestrator | Sunday 05 April 2026 01:06:46 +0000 (0:00:01.752) 0:00:38.320 ********** 2026-04-05 01:09:40.731458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 
01:09:40.731469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.731492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.731511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.731785 | orchestrator | 2026-04-05 01:09:40.731795 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-05 01:09:40.731805 | orchestrator | Sunday 05 April 2026 01:06:54 +0000 (0:00:08.080) 0:00:46.400 ********** 2026-04-05 01:09:40.731815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.731826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.731836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731893 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:40.731903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.731913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.731924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.731982 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 01:09:40.731992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.732002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.732013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732073 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:40.732083 | orchestrator | 2026-04-05 01:09:40.732093 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-05 01:09:40.732102 | orchestrator | Sunday 05 April 2026 01:06:57 +0000 (0:00:02.468) 0:00:48.869 ********** 2026-04-05 01:09:40.732112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.732123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.732139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732181 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732191 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:40.732201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.732211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.732228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732279 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:40.732290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.732300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.732316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.732367 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:40.732397 | orchestrator | 2026-04-05 01:09:40.732407 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-05 01:09:40.732418 | orchestrator | Sunday 05 April 2026 01:06:59 +0000 (0:00:02.290) 0:00:51.160 ********** 2026-04-05 01:09:40.732427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.732438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.732458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-04-05 01:09:40.732472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 
01:09:40.732545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.732612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733095 | orchestrator | 2026-04-05 01:09:40.733105 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-05 01:09:40.733115 | orchestrator | Sunday 05 April 2026 01:07:07 +0000 (0:00:08.250) 0:00:59.411 ********** 2026-04-05 01:09:40.733125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.733135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.733153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.733169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-04-05 01:09:40.733195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733231 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733430 | orchestrator | 2026-04-05 01:09:40.733449 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-05 01:09:40.733465 | orchestrator | Sunday 05 April 2026 01:07:27 +0000 (0:00:20.188) 0:01:19.600 ********** 2026-04-05 01:09:40.733487 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 01:09:40.733504 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 01:09:40.733519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 01:09:40.733534 | orchestrator | 2026-04-05 01:09:40.733550 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-05 01:09:40.733565 | orchestrator | Sunday 05 April 2026 01:07:34 +0000 (0:00:06.577) 0:01:26.177 ********** 2026-04-05 01:09:40.733581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 01:09:40.733598 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 01:09:40.733614 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 01:09:40.733642 | orchestrator | 2026-04-05 01:09:40.733658 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-05 01:09:40.733675 | orchestrator | Sunday 05 April 2026 01:07:38 +0000 (0:00:03.993) 0:01:30.171 ********** 2026-04-05 01:09:40.733688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.733702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.733724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.733741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733772 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.733890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.733942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734111 | orchestrator | 2026-04-05 01:09:40.734122 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-05 01:09:40.734131 | orchestrator | Sunday 05 April 2026 01:07:42 +0000 (0:00:03.643) 0:01:33.815 ********** 2026-04-05 01:09:40.734142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.734152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.734162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.734179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734194 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.734521 | orchestrator | 2026-04-05 01:09:40.734537 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 01:09:40.734553 | orchestrator | Sunday 05 April 2026 01:07:45 +0000 (0:00:03.485) 0:01:37.300 ********** 2026-04-05 01:09:40.734570 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:40.734594 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:40.734610 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:40.734625 | orchestrator | 2026-04-05 01:09:40.734639 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-05 01:09:40.734653 | orchestrator | Sunday 05 April 2026 01:07:46 +0000 (0:00:00.685) 0:01:37.986 ********** 2026-04-05 01:09:40.734669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.734686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.734702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734808 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:40.734826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.734837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.734847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.734933 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 01:09:40.734958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 01:09:40.734977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 01:09:40.734995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.735013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.735030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.735077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:09:40.735095 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:40.735111 | orchestrator | 2026-04-05 01:09:40.735127 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-05 01:09:40.735141 | orchestrator | Sunday 05 April 2026 01:07:47 +0000 (0:00:01.812) 0:01:39.798 ********** 2026-04-05 01:09:40.735170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.735188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.735203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 01:09:40.735220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:09:40.735521 | orchestrator | 2026-04-05 01:09:40.735532 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 01:09:40.735541 | orchestrator | Sunday 05 April 2026 01:07:53 +0000 (0:00:05.421) 0:01:45.220 ********** 2026-04-05 01:09:40.735551 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:09:40.735561 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:09:40.735570 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:09:40.735580 | orchestrator | 2026-04-05 01:09:40.735589 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-05 01:09:40.735599 | orchestrator | Sunday 05 April 2026 01:07:53 +0000 (0:00:00.558) 0:01:45.779 ********** 2026-04-05 01:09:40.735609 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-05 01:09:40.735619 | orchestrator | 2026-04-05 01:09:40.735628 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-05 01:09:40.735637 | orchestrator | Sunday 05 April 2026 01:07:56 +0000 (0:00:02.601) 0:01:48.380 ********** 2026-04-05 01:09:40.735647 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-05 01:09:40.735657 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-05 01:09:40.735666 | orchestrator | 2026-04-05 01:09:40.735681 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-05 01:09:40.735691 | orchestrator | Sunday 05 April 2026 01:07:59 +0000 (0:00:02.645) 0:01:51.026 ********** 2026-04-05 01:09:40.735700 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.735710 | orchestrator | 2026-04-05 01:09:40.735719 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 01:09:40.735729 | orchestrator | Sunday 05 April 2026 01:08:15 +0000 (0:00:16.537) 0:02:07.564 ********** 2026-04-05 01:09:40.735738 | orchestrator | 2026-04-05 01:09:40.735748 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 01:09:40.735757 | orchestrator | Sunday 05 April 2026 01:08:15 +0000 (0:00:00.067) 0:02:07.632 ********** 2026-04-05 01:09:40.735766 | orchestrator | 2026-04-05 01:09:40.735776 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 01:09:40.735785 | orchestrator | Sunday 05 April 2026 01:08:15 +0000 (0:00:00.063) 0:02:07.695 ********** 2026-04-05 01:09:40.735795 | orchestrator | 2026-04-05 01:09:40.735804 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-05 01:09:40.735813 | orchestrator | Sunday 05 April 2026 01:08:15 +0000 (0:00:00.065) 0:02:07.761 ********** 2026-04-05 01:09:40.735823 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.735832 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:40.735842 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:40.735851 | orchestrator | 2026-04-05 01:09:40.735861 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-05 
01:09:40.735877 | orchestrator | Sunday 05 April 2026 01:08:29 +0000 (0:00:13.453) 0:02:21.214 ********** 2026-04-05 01:09:40.735887 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.735897 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:40.735906 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:40.735916 | orchestrator | 2026-04-05 01:09:40.735925 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-05 01:09:40.735935 | orchestrator | Sunday 05 April 2026 01:08:41 +0000 (0:00:11.688) 0:02:32.902 ********** 2026-04-05 01:09:40.735945 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.735954 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:40.735964 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:40.735973 | orchestrator | 2026-04-05 01:09:40.735983 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-05 01:09:40.735993 | orchestrator | Sunday 05 April 2026 01:08:54 +0000 (0:00:12.971) 0:02:45.874 ********** 2026-04-05 01:09:40.736002 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:40.736012 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:40.736021 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.736031 | orchestrator | 2026-04-05 01:09:40.736040 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-05 01:09:40.736050 | orchestrator | Sunday 05 April 2026 01:09:07 +0000 (0:00:12.958) 0:02:58.833 ********** 2026-04-05 01:09:40.736059 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.736069 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:40.736078 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:40.736088 | orchestrator | 2026-04-05 01:09:40.736097 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-05 01:09:40.736107 
| orchestrator | Sunday 05 April 2026 01:09:22 +0000 (0:00:15.134) 0:03:13.967 ********** 2026-04-05 01:09:40.736117 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.736126 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:09:40.736136 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:09:40.736145 | orchestrator | 2026-04-05 01:09:40.736155 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-05 01:09:40.736164 | orchestrator | Sunday 05 April 2026 01:09:31 +0000 (0:00:09.332) 0:03:23.300 ********** 2026-04-05 01:09:40.736174 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:09:40.736183 | orchestrator | 2026-04-05 01:09:40.736193 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:09:40.736210 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:09:40.736227 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:09:40.736244 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:09:40.736259 | orchestrator | 2026-04-05 01:09:40.736300 | orchestrator | 2026-04-05 01:09:40.736340 | orchestrator | TASKS RECAP *****************************************************************2026-04-05 01:09:40 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:40.736358 | orchestrator | 2026-04-05 01:09:40 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:40.736375 | orchestrator | *** 2026-04-05 01:09:40.736419 | orchestrator | Sunday 05 April 2026 01:09:39 +0000 (0:00:08.487) 0:03:31.787 ********** 2026-04-05 01:09:40.736437 | orchestrator | =============================================================================== 2026-04-05 01:09:40.736450 | orchestrator | designate : Copying over 
designate.conf -------------------------------- 20.19s 2026-04-05 01:09:40.736459 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.54s 2026-04-05 01:09:40.736474 | orchestrator | designate : Restart designate-mdns container --------------------------- 15.13s 2026-04-05 01:09:40.736508 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.45s 2026-04-05 01:09:40.736529 | orchestrator | designate : Restart designate-central container ------------------------ 12.97s 2026-04-05 01:09:40.736544 | orchestrator | designate : Restart designate-producer container ----------------------- 12.96s 2026-04-05 01:09:40.736566 | orchestrator | designate : Restart designate-api container ---------------------------- 11.69s 2026-04-05 01:09:40.736583 | orchestrator | designate : Restart designate-worker container -------------------------- 9.33s 2026-04-05 01:09:40.736597 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.49s 2026-04-05 01:09:40.736611 | orchestrator | designate : Copying over config.json files for services ----------------- 8.25s 2026-04-05 01:09:40.736626 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.08s 2026-04-05 01:09:40.736642 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.42s 2026-04-05 01:09:40.736658 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.58s 2026-04-05 01:09:40.736673 | orchestrator | designate : Check designate containers ---------------------------------- 5.42s 2026-04-05 01:09:40.736688 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.65s 2026-04-05 01:09:40.736703 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.55s 2026-04-05 01:09:40.736720 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.38s 2026-04-05 01:09:40.736736 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.24s 2026-04-05 01:09:40.736753 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.99s 2026-04-05 01:09:40.736770 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.94s 2026-04-05 01:09:43.757801 | orchestrator | 2026-04-05 01:09:43 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:43.757911 | orchestrator | 2026-04-05 01:09:43 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:43.758308 | orchestrator | 2026-04-05 01:09:43 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:09:43.759088 | orchestrator | 2026-04-05 01:09:43 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:43.759111 | orchestrator | 2026-04-05 01:09:43 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:46.789584 | orchestrator | 2026-04-05 01:09:46 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:46.792041 | orchestrator | 2026-04-05 01:09:46 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state STARTED 2026-04-05 01:09:46.792617 | orchestrator | 2026-04-05 01:09:46 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:09:46.793783 | orchestrator | 2026-04-05 01:09:46 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:09:46.793846 | orchestrator | 2026-04-05 01:09:46 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:09:49.850665 | orchestrator | 2026-04-05 01:09:49 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:09:49.851591 | orchestrator | 2026-04-05 01:09:49 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state 
STARTED 2026-04-05 01:10:17.308280 | orchestrator | 2026-04-05 01:10:17 | INFO  |
Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED
2026-04-05 01:10:17.310710 | orchestrator | 2026-04-05 01:10:17 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED
2026-04-05 01:10:17.310797 | orchestrator | 2026-04-05 01:10:17 | INFO  | Wait 1 second(s) until the next check
2026-04-05 01:10:20.397184 | orchestrator | 2026-04-05 01:10:20 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED
2026-04-05 01:10:20.398278 | orchestrator | 2026-04-05 01:10:20 | INFO  | Task a08d6d8a-a2e0-443e-ba8e-aa7bb227cfe9 is in state SUCCESS
2026-04-05 01:10:20.399790 | orchestrator |
2026-04-05 01:10:20.399825 | orchestrator |
2026-04-05 01:10:20.399837 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 01:10:20.399849 | orchestrator |
2026-04-05 01:10:20.399860 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 01:10:20.399872 | orchestrator | Sunday 05 April 2026 01:05:23 +0000 (0:00:00.314) 0:00:00.314 **********
2026-04-05 01:10:20.399891 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:10:20.399912 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:10:20.399941 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:10:20.399965 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:10:20.399985 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:10:20.400004 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:10:20.400023 | orchestrator |
2026-04-05 01:10:20.400045 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:10:20.400065 | orchestrator | Sunday 05 April 2026 01:05:24 +0000 (0:00:00.508) 0:00:00.823 **********
2026-04-05 01:10:20.400086 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-05 01:10:20.400119 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-05 01:10:20.400139 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-05 01:10:20.400150 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-05 01:10:20.400161 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-05 01:10:20.400204 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-05 01:10:20.400222 | orchestrator |
2026-04-05 01:10:20.400240 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-05 01:10:20.400257 | orchestrator |
2026-04-05 01:10:20.400274 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-05 01:10:20.400292 | orchestrator | Sunday 05 April 2026 01:05:24 +0000 (0:00:00.639) 0:00:01.462 **********
2026-04-05 01:10:20.400314 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:10:20.400334 | orchestrator |
2026-04-05 01:10:20.400353 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-05 01:10:20.400372 | orchestrator | Sunday 05 April 2026 01:05:25 +0000 (0:00:00.990) 0:00:02.452 **********
2026-04-05 01:10:20.400390 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:10:20.400481 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:10:20.400502 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:10:20.400521 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:10:20.400539 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:10:20.400559 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:10:20.400579 | orchestrator |
2026-04-05 01:10:20.400598 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-05 01:10:20.400617 | orchestrator | Sunday 05 April 2026 01:05:27 +0000 (0:00:01.405) 0:00:03.858 **********
2026-04-05 01:10:20.400634 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:10:20.400648 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:10:20.400663 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:10:20.400676 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:10:20.400689 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:10:20.400701 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:10:20.400714 | orchestrator |
2026-04-05 01:10:20.400725 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-05 01:10:20.400736 | orchestrator | Sunday 05 April 2026 01:05:28 +0000 (0:00:01.126) 0:00:04.984 **********
2026-04-05 01:10:20.400746 | orchestrator | ok: [testbed-node-0] => {
2026-04-05 01:10:20.400758 | orchestrator |  "changed": false,
2026-04-05 01:10:20.400769 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:10:20.400780 | orchestrator | }
2026-04-05 01:10:20.400799 | orchestrator | ok: [testbed-node-1] => {
2026-04-05 01:10:20.400817 | orchestrator |  "changed": false,
2026-04-05 01:10:20.400836 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:10:20.400853 | orchestrator | }
2026-04-05 01:10:20.400870 | orchestrator | ok: [testbed-node-2] => {
2026-04-05 01:10:20.400888 | orchestrator |  "changed": false,
2026-04-05 01:10:20.400907 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:10:20.400926 | orchestrator | }
2026-04-05 01:10:20.400945 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 01:10:20.400965 | orchestrator |  "changed": false,
2026-04-05 01:10:20.400984 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:10:20.401002 | orchestrator | }
2026-04-05 01:10:20.401021 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 01:10:20.401032 | orchestrator |  "changed": false,
2026-04-05 01:10:20.401044 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:10:20.401054 | orchestrator | }
2026-04-05 01:10:20.401065 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 01:10:20.401075 | orchestrator |  "changed": false,
2026-04-05 01:10:20.401086 | orchestrator |  "msg": "All assertions passed"
2026-04-05 01:10:20.401097 | orchestrator | }
2026-04-05 01:10:20.401107 | orchestrator |
2026-04-05 01:10:20.401118 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-05 01:10:20.401129 | orchestrator | Sunday 05 April 2026 01:05:29 +0000 (0:00:00.585) 0:00:05.570 **********
2026-04-05 01:10:20.401139 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.401150 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.401161 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.401184 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.401194 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.401205 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.401216 | orchestrator |
2026-04-05 01:10:20.401226 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-04-05 01:10:20.401237 | orchestrator | Sunday 05 April 2026 01:05:29 +0000 (0:00:00.728) 0:00:06.298 **********
2026-04-05 01:10:20.401248 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-05 01:10:20.401259 | orchestrator |
2026-04-05 01:10:20.401269 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-04-05 01:10:20.401280 | orchestrator | Sunday 05 April 2026 01:05:34 +0000 (0:00:04.324) 0:00:10.623 **********
2026-04-05 01:10:20.401291 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-05 01:10:20.401303 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-05 01:10:20.401314 | orchestrator |
2026-04-05 01:10:20.401352 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-05 01:10:20.401364 | orchestrator | Sunday 05 April 2026 01:05:41 +0000 (0:00:07.501) 0:00:18.124 **********
2026-04-05 01:10:20.401375 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 01:10:20.401386 | orchestrator |
2026-04-05 01:10:20.401443 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-05 01:10:20.401456 | orchestrator | Sunday 05 April 2026 01:05:45 +0000 (0:00:03.676) 0:00:21.801 **********
2026-04-05 01:10:20.401467 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-05 01:10:20.401478 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 01:10:20.401489 | orchestrator |
2026-04-05 01:10:20.401499 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-05 01:10:20.401510 | orchestrator | Sunday 05 April 2026 01:05:49 +0000 (0:00:04.610) 0:00:26.411 **********
2026-04-05 01:10:20.401520 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 01:10:20.401531 | orchestrator |
2026-04-05 01:10:20.401542 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-04-05 01:10:20.401552 | orchestrator | Sunday 05 April 2026 01:05:53 +0000 (0:00:03.635) 0:00:30.047 **********
2026-04-05 01:10:20.401563 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-05 01:10:20.401573 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-05 01:10:20.401584 | orchestrator |
2026-04-05 01:10:20.401595 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-05 01:10:20.401605 | orchestrator | Sunday 05 April 2026 01:06:01 +0000 (0:00:08.416) 0:00:38.464 **********
2026-04-05 01:10:20.401616 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.401626 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.401637 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.401648 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.401658 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.401668 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.401679 | orchestrator |
2026-04-05 01:10:20.401690 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-05 01:10:20.401700 | orchestrator | Sunday 05 April 2026 01:06:02 +0000 (0:00:00.573) 0:00:39.037 **********
2026-04-05 01:10:20.401711 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.401721 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.401732 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.401742 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.401753 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.401764 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.401774 | orchestrator |
2026-04-05 01:10:20.401785 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-05 01:10:20.401796 | orchestrator | Sunday 05 April 2026 01:06:05 +0000 (0:00:03.389) 0:00:42.427 **********
2026-04-05 01:10:20.401815 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:10:20.401826 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:10:20.401836 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:10:20.401847 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:10:20.401857 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:10:20.401868 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:10:20.401879 | orchestrator |
2026-04-05 01:10:20.401889 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-05 01:10:20.401900 | orchestrator | Sunday 05 April 2026 01:06:06 +0000 (0:00:01.061) 0:00:43.488 **********
2026-04-05 01:10:20.401911 | orchestrator |
skipping: [testbed-node-0]
2026-04-05 01:10:20.401921 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.401932 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.401943 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.401953 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.401964 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.401974 | orchestrator |
2026-04-05 01:10:20.401985 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-05 01:10:20.401996 | orchestrator | Sunday 05 April 2026 01:06:09 +0000 (0:00:02.692) 0:00:46.181 **********
2026-04-05 01:10:20.402010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402189 | orchestrator |
2026-04-05 01:10:20.402210 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-04-05 01:10:20.402231 | orchestrator |
Sunday 05 April 2026 01:06:12 +0000 (0:00:02.839) 0:00:49.020 **********
2026-04-05 01:10:20.402251 | orchestrator | [WARNING]: Skipped
2026-04-05 01:10:20.402269 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-04-05 01:10:20.402286 | orchestrator | due to this access issue:
2026-04-05 01:10:20.402307 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-04-05 01:10:20.402329 | orchestrator | a directory
2026-04-05 01:10:20.402350 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 01:10:20.402370 | orchestrator |
2026-04-05 01:10:20.402392 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-05 01:10:20.402451 | orchestrator | Sunday 05 April 2026 01:06:13 +0000 (0:00:00.894) 0:00:49.915 **********
2026-04-05 01:10:20.402471 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:10:20.402484 | orchestrator |
2026-04-05 01:10:20.402495 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-04-05 01:10:20.402506 | orchestrator | Sunday 05 April 2026 01:06:14 +0000 (0:00:01.318) 0:00:51.233 **********
2026-04-05 01:10:20.402517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402619 | orchestrator |
2026-04-05 01:10:20.402631 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-04-05 01:10:20.402642 | orchestrator | Sunday 05 April 2026 01:06:18 +0000 (0:00:03.510) 0:00:54.744 **********
2026-04-05 01:10:20.402653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402665 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.402676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402699 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.402710 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.402733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402752 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.402764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402775 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.402786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402797 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.402808 | orchestrator |
2026-04-05 01:10:20.402819 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-04-05 01:10:20.402830 | orchestrator | Sunday 05 April 2026 01:06:20 +0000 (0:00:02.030) 0:00:56.775 **********
2026-04-05 01:10:20.402841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402852 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.402876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402901 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.402912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.402924 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.402935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402946 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.402957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402968 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.402979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.402990 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.403000 | orchestrator |
2026-04-05 01:10:20.403011 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-04-05 01:10:20.403028 | orchestrator | Sunday 05 April 2026 01:06:23 +0000 (0:00:02.547) 0:00:59.756 **********
2026-04-05 01:10:20.403039 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.403050 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.403061 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.403072 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.403082 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.403093 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.403103 | orchestrator |
2026-04-05 01:10:20.403114 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-04-05 01:10:20.403136 | orchestrator | Sunday 05 April 2026 01:06:25 +0000 (0:00:00.342) 0:01:02.304 **********
2026-04-05 01:10:20.403148 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.403158 | orchestrator |
2026-04-05 01:10:20.403169 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-04-05 01:10:20.403180 | orchestrator | Sunday 05 April 2026 01:06:26 +0000 (0:00:00.342) 0:01:02.647 **********
2026-04-05 01:10:20.403191 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.403201 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.403212 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.403223 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.403234 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.403244 | orchestrator |
skipping: [testbed-node-5] 2026-04-05 01:10:20.403255 | orchestrator | 2026-04-05 01:10:20.403266 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-05 01:10:20.403276 | orchestrator | Sunday 05 April 2026 01:06:26 +0000 (0:00:00.613) 0:01:03.260 ********** 2026-04-05 01:10:20.403287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 01:10:20.403299 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:20.403310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 01:10:20.403321 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:20.403332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 01:10:20.403349 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:20.403372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.403384 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:10:20.403425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.403440 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:10:20.403451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.403462 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:10:20.403473 | orchestrator | 2026-04-05 01:10:20.403484 | orchestrator | TASK [neutron : Copying over config.json files for 
services] ******************* 2026-04-05 01:10:20.403494 | orchestrator | Sunday 05 April 2026 01:06:29 +0000 (0:00:02.798) 0:01:06.058 ********** 2026-04-05 01:10:20.403505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 01:10:20.403524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-04-05 01:10:20.403549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 01:10:20.403562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:10:20.403573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:10:20.403585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:10:20.403602 | orchestrator | 2026-04-05 01:10:20.403613 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-05 01:10:20.403624 | orchestrator | Sunday 05 April 2026 01:06:32 +0000 (0:00:03.316) 0:01:09.375 ********** 2026-04-05 01:10:20.403635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 01:10:20.403658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:10:20.403670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 01:10:20.403681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 01:10:20.403692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:10:20.403710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 01:10:20.403721 | orchestrator | 2026-04-05 01:10:20.403732 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-05 01:10:20.403743 | orchestrator | Sunday 05 April 2026 01:06:40 +0000 (0:00:07.234) 0:01:16.610 ********** 2026-04-05 01:10:20.403767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 01:10:20.403779 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:20.403790 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.403801 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:10:20.403813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 01:10:20.403830 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:20.403842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 01:10:20.403852 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:20.403864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.403875 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:10:20.403904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.403933 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:10:20.403955 | orchestrator | 2026-04-05 01:10:20.403975 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-05 01:10:20.403994 | orchestrator | Sunday 05 April 2026 01:06:43 +0000 (0:00:02.991) 0:01:19.601 ********** 2026-04-05 01:10:20.404013 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:10:20.404030 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:10:20.404051 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:10:20.404072 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:10:20.404093 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:20.404115 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:10:20.404134 | orchestrator | 2026-04-05 01:10:20.404154 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-05 01:10:20.404174 | orchestrator | Sunday 05 April 2026 01:06:47 +0000 (0:00:04.575) 0:01:24.176 ********** 2026-04-05 01:10:20.404196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.404228 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:10:20.404250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 01:10:20.404272 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:10:20.404294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.404315 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.404359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.404382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.404450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.404474 | orchestrator |
2026-04-05 01:10:20.404495 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-04-05 01:10:20.404516 | orchestrator | Sunday 05 April 2026 01:06:52 +0000 (0:00:04.407) 0:01:28.584 **********
2026-04-05 01:10:20.404537 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.404558 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.404578 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.404599 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.404619 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.404639 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.404661 | orchestrator |
2026-04-05 01:10:20.404682 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-04-05 01:10:20.404703 | orchestrator | Sunday 05 April 2026 01:06:54 +0000 (0:00:02.312) 0:01:30.897 **********
2026-04-05 01:10:20.404724 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.404744 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.404764 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.404784 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.404805 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.404826 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.404847 | orchestrator |
2026-04-05 01:10:20.404868 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-04-05 01:10:20.404889 | orchestrator | Sunday 05 April 2026 01:06:58 +0000 (0:00:03.920) 0:01:34.818 **********
2026-04-05 01:10:20.404910 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.404931 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.404952 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.404972 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.404992 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.405012 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.405034 | orchestrator |
2026-04-05 01:10:20.405056 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-04-05 01:10:20.405077 | orchestrator | Sunday 05 April 2026 01:07:01 +0000 (0:00:03.100) 0:01:37.918 **********
2026-04-05 01:10:20.405096 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.405118 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.405139 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.405157 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.405176 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.405194 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.405214 | orchestrator |
2026-04-05 01:10:20.405235 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-04-05 01:10:20.405256 | orchestrator | Sunday 05 April 2026 01:07:03 +0000 (0:00:02.103) 0:01:40.021 **********
2026-04-05 01:10:20.405276 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.405297 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.405329 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.405357 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.405391 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.405439 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.405459 | orchestrator |
2026-04-05 01:10:20.405478 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-04-05 01:10:20.405497 | orchestrator | Sunday 05 April 2026 01:07:06 +0000 (0:00:02.608) 0:01:42.630 **********
2026-04-05 01:10:20.405514 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.405532 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.405550 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.405569 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.405588 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.405607 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.405622 | orchestrator |
2026-04-05 01:10:20.405633 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-04-05 01:10:20.405644 | orchestrator | Sunday 05 April 2026 01:07:09 +0000 (0:00:02.953) 0:01:45.583 **********
2026-04-05 01:10:20.405655 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 01:10:20.405666 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.405676 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 01:10:20.405687 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.405698 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 01:10:20.405708 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.405719 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 01:10:20.405730 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.405740 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 01:10:20.405751 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.405761 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 01:10:20.405772 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.405783 | orchestrator |
2026-04-05 01:10:20.405794 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-04-05 01:10:20.405804 | orchestrator | Sunday 05 April 2026 01:07:12 +0000 (0:00:03.150) 0:01:48.734 **********
2026-04-05 01:10:20.405816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.405828 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.405839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.405860 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.405887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.405899 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.405910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.405921 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.406091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.406103 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.406114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.406125 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.406136 | orchestrator |
2026-04-05 01:10:20.406147 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-04-05 01:10:20.406167 | orchestrator | Sunday 05 April 2026 01:07:15 +0000 (0:00:03.717) 0:01:52.452 **********
2026-04-05 01:10:20.406178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.406190 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.406226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.406255 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.406278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.406297 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.406316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.406335 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.406357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.406543 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.406609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.406624 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.406636 | orchestrator |
2026-04-05 01:10:20.406645 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-04-05 01:10:20.406653 | orchestrator | Sunday 05 April 2026 01:07:18 +0000 (0:00:02.887) 0:01:55.340 **********
2026-04-05 01:10:20.406661 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.406690 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.406705 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.406717 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.406729 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.406737 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.406745 | orchestrator |
2026-04-05 01:10:20.406753 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-05 01:10:20.406761 | orchestrator | Sunday 05 April 2026 01:07:22 +0000 (0:00:03.356) 0:01:58.696 **********
2026-04-05 01:10:20.406769 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.406776 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.406784 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.406792 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:10:20.406799 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:10:20.406807 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:10:20.406815 | orchestrator |
2026-04-05 01:10:20.406823 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-05 01:10:20.406830 | orchestrator | Sunday 05 April 2026 01:07:26 +0000 (0:00:03.882) 0:02:02.578 **********
2026-04-05 01:10:20.406838 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.406846 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.406853 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.406861 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.406869 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.406877 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.406884 | orchestrator |
2026-04-05 01:10:20.406892 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-05 01:10:20.406900 | orchestrator | Sunday 05 April 2026 01:07:28 +0000 (0:00:02.534) 0:02:05.113 **********
2026-04-05 01:10:20.406908 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.406915 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.406923 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.406931 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.406939 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.406958 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.406972 | orchestrator |
2026-04-05 01:10:20.406985 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-05 01:10:20.407000 | orchestrator | Sunday 05 April 2026 01:07:31 +0000 (0:00:03.338) 0:02:08.452 **********
2026-04-05 01:10:20.407011 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407019 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407027 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407035 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407043 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407050 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407058 | orchestrator |
2026-04-05 01:10:20.407066 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-05 01:10:20.407074 | orchestrator | Sunday 05 April 2026 01:07:35 +0000 (0:00:03.776) 0:02:12.229 **********
2026-04-05 01:10:20.407082 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407089 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407097 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407105 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407137 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407145 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407153 | orchestrator |
2026-04-05 01:10:20.407161 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-05 01:10:20.407169 | orchestrator | Sunday 05 April 2026 01:07:38 +0000 (0:00:03.238) 0:02:15.467 **********
2026-04-05 01:10:20.407177 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407184 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407192 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407200 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407208 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407215 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407223 | orchestrator |
2026-04-05 01:10:20.407231 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-05 01:10:20.407239 | orchestrator | Sunday 05 April 2026 01:07:41 +0000 (0:00:02.829) 0:02:18.296 **********
2026-04-05 01:10:20.407247 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407255 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407262 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407270 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407278 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407286 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407294 | orchestrator |
2026-04-05 01:10:20.407302 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-05 01:10:20.407309 | orchestrator | Sunday 05 April 2026 01:07:44 +0000 (0:00:02.515) 0:02:20.812 **********
2026-04-05 01:10:20.407317 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407325 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407333 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407341 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407350 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407357 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407365 | orchestrator |
2026-04-05 01:10:20.407373 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-05 01:10:20.407381 | orchestrator | Sunday 05 April 2026 01:07:48 +0000 (0:00:04.291) 0:02:25.103 **********
2026-04-05 01:10:20.407389 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 01:10:20.407421 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407434 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 01:10:20.407442 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407450 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 01:10:20.407465 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407473 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 01:10:20.407481 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407500 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 01:10:20.407509 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407516 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 01:10:20.407524 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407532 | orchestrator |
2026-04-05 01:10:20.407540 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-05 01:10:20.407548 | orchestrator | Sunday 05 April 2026 01:07:50 +0000 (0:00:02.377) 0:02:27.481 **********
2026-04-05 01:10:20.407556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.407565 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.407581 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.407597 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.407619 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.407645 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.407681 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407690 | orchestrator |
2026-04-05 01:10:20.407698 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-04-05 01:10:20.407706 | orchestrator | Sunday 05 April 2026 01:07:53 +0000 (0:00:02.519) 0:02:30.001 **********
2026-04-05 01:10:20.407715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.407725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.407739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.407761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.407771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 01:10:20.407779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 01:10:20.407787 | orchestrator |
2026-04-05 01:10:20.407796 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-05 01:10:20.407804 | orchestrator | Sunday 05 April 2026 01:07:56 +0000 (0:00:02.730) 0:02:32.732 **********
2026-04-05 01:10:20.407812 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:10:20.407820 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:10:20.407827 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:10:20.407835 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:10:20.407848 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:10:20.407856 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:10:20.407863 | orchestrator |
2026-04-05 01:10:20.407871 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-05 01:10:20.407879 | orchestrator | Sunday 05 April 2026 01:07:56 +0000 (0:00:00.639) 0:02:33.371 **********
2026-04-05 01:10:20.407887 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:20.407895 | orchestrator |
2026-04-05 01:10:20.407903 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-05 01:10:20.407911 | orchestrator | Sunday 05 April 2026 01:07:59 +0000 (0:00:02.508) 0:02:35.880 **********
2026-04-05 01:10:20.407918 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:20.407926 | orchestrator |
2026-04-05 01:10:20.407934 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-05 01:10:20.407942 | orchestrator | Sunday 05 April 2026 01:08:01 +0000 (0:00:02.615) 0:02:38.495 **********
2026-04-05 01:10:20.407950 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:10:20.407957 | orchestrator |
2026-04-05 01:10:20.407965 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:10:20.407973 | orchestrator | Sunday 05 April 2026 01:08:46 +0000 (0:00:44.431) 0:03:22.926 **********
2026-04-05 01:10:20.407981 | orchestrator |
2026-04-05 01:10:20.407989 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 01:10:20.407997 | orchestrator | Sunday 05 April
2026 01:08:46 +0000 (0:00:00.273) 0:03:23.200 ********** 2026-04-05 01:10:20.408005 | orchestrator | 2026-04-05 01:10:20.408012 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 01:10:20.408020 | orchestrator | Sunday 05 April 2026 01:08:46 +0000 (0:00:00.183) 0:03:23.384 ********** 2026-04-05 01:10:20.408028 | orchestrator | 2026-04-05 01:10:20.408036 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 01:10:20.408044 | orchestrator | Sunday 05 April 2026 01:08:46 +0000 (0:00:00.098) 0:03:23.482 ********** 2026-04-05 01:10:20.408052 | orchestrator | 2026-04-05 01:10:20.408068 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 01:10:20.408077 | orchestrator | Sunday 05 April 2026 01:08:47 +0000 (0:00:00.068) 0:03:23.550 ********** 2026-04-05 01:10:20.408085 | orchestrator | 2026-04-05 01:10:20.408093 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 01:10:20.408100 | orchestrator | Sunday 05 April 2026 01:08:47 +0000 (0:00:00.079) 0:03:23.630 ********** 2026-04-05 01:10:20.408108 | orchestrator | 2026-04-05 01:10:20.408116 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-05 01:10:20.408124 | orchestrator | Sunday 05 April 2026 01:08:47 +0000 (0:00:00.089) 0:03:23.719 ********** 2026-04-05 01:10:20.408132 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:20.408139 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:10:20.408147 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:10:20.408155 | orchestrator | 2026-04-05 01:10:20.408163 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-05 01:10:20.408171 | orchestrator | Sunday 05 April 2026 01:09:22 +0000 (0:00:35.610) 0:03:59.329 ********** 2026-04-05 
01:10:20.408178 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:10:20.408186 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:10:20.408194 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:10:20.408202 | orchestrator | 2026-04-05 01:10:20.408210 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:10:20.408219 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 01:10:20.408228 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-05 01:10:20.408235 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-05 01:10:20.408248 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 01:10:20.408256 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 01:10:20.408264 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 01:10:20.408272 | orchestrator | 2026-04-05 01:10:20.408280 | orchestrator | 2026-04-05 01:10:20.408288 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:10:20.408296 | orchestrator | Sunday 05 April 2026 01:10:19 +0000 (0:00:56.272) 0:04:55.601 ********** 2026-04-05 01:10:20.408304 | orchestrator | =============================================================================== 2026-04-05 01:10:20.408312 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 56.27s 2026-04-05 01:10:20.408319 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.43s 2026-04-05 01:10:20.408327 | orchestrator | neutron : Restart neutron-server container ----------------------------- 35.61s 
2026-04-05 01:10:20.408335 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.42s 2026-04-05 01:10:20.408343 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.50s 2026-04-05 01:10:20.408351 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.23s 2026-04-05 01:10:20.408359 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.61s 2026-04-05 01:10:20.408366 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.58s 2026-04-05 01:10:20.408374 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.41s 2026-04-05 01:10:20.408382 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.32s 2026-04-05 01:10:20.408390 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.29s 2026-04-05 01:10:20.408423 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.92s 2026-04-05 01:10:20.408438 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.88s 2026-04-05 01:10:20.408450 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.78s 2026-04-05 01:10:20.408457 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.72s 2026-04-05 01:10:20.408465 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.68s 2026-04-05 01:10:20.408473 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.64s 2026-04-05 01:10:20.408481 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.51s 2026-04-05 01:10:20.408489 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.39s 2026-04-05 
01:10:20.408497 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.36s 2026-04-05 01:10:20.408504 | orchestrator | 2026-04-05 01:10:20 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:20.408513 | orchestrator | 2026-04-05 01:10:20 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:10:20.408521 | orchestrator | 2026-04-05 01:10:20 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:23.430275 | orchestrator | 2026-04-05 01:10:23 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:23.430632 | orchestrator | 2026-04-05 01:10:23 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:23.431223 | orchestrator | 2026-04-05 01:10:23 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:23.432541 | orchestrator | 2026-04-05 01:10:23 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:10:23.432583 | orchestrator | 2026-04-05 01:10:23 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:26.465241 | orchestrator | 2026-04-05 01:10:26 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:26.468378 | orchestrator | 2026-04-05 01:10:26 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:26.474647 | orchestrator | 2026-04-05 01:10:26 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:26.478084 | orchestrator | 2026-04-05 01:10:26 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state STARTED 2026-04-05 01:10:26.478191 | orchestrator | 2026-04-05 01:10:26 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:29.539819 | orchestrator | 2026-04-05 01:10:29 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:29.539900 | orchestrator | 2026-04-05 
01:10:29 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:29.539908 | orchestrator | 2026-04-05 01:10:29 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:29.540306 | orchestrator | 2026-04-05 01:10:29 | INFO  | Task 134735f7-9de9-4f32-98ff-f4a374d90462 is in state STARTED 2026-04-05 01:10:29.542685 | orchestrator | 2026-04-05 01:10:29 | INFO  | Task 03750040-aa24-4e1c-9ab8-68a6c056ef7d is in state SUCCESS 2026-04-05 01:10:29.542788 | orchestrator | 2026-04-05 01:10:29 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:29.543843 | orchestrator | 2026-04-05 01:10:29.543875 | orchestrator | 2026-04-05 01:10:29.543886 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:10:29.543896 | orchestrator | 2026-04-05 01:10:29.543904 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:10:29.543912 | orchestrator | Sunday 05 April 2026 01:09:10 +0000 (0:00:01.040) 0:00:01.040 ********** 2026-04-05 01:10:29.543920 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:10:29.543929 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:10:29.543937 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:10:29.543945 | orchestrator | 2026-04-05 01:10:29.543953 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:10:29.543962 | orchestrator | Sunday 05 April 2026 01:09:10 +0000 (0:00:00.727) 0:00:01.768 ********** 2026-04-05 01:10:29.543971 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-05 01:10:29.543979 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-05 01:10:29.543987 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-05 01:10:29.543994 | orchestrator | 2026-04-05 01:10:29.544002 | orchestrator | PLAY [Apply role placement] 
**************************************************** 2026-04-05 01:10:29.544010 | orchestrator | 2026-04-05 01:10:29.544018 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 01:10:29.544026 | orchestrator | Sunday 05 April 2026 01:09:11 +0000 (0:00:00.737) 0:00:02.505 ********** 2026-04-05 01:10:29.544033 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:10:29.544043 | orchestrator | 2026-04-05 01:10:29.544051 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-05 01:10:29.544059 | orchestrator | Sunday 05 April 2026 01:09:12 +0000 (0:00:00.917) 0:00:03.422 ********** 2026-04-05 01:10:29.544067 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-05 01:10:29.544075 | orchestrator | 2026-04-05 01:10:29.544082 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-05 01:10:29.544090 | orchestrator | Sunday 05 April 2026 01:09:17 +0000 (0:00:04.599) 0:00:08.022 ********** 2026-04-05 01:10:29.544121 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-05 01:10:29.544129 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-05 01:10:29.544137 | orchestrator | 2026-04-05 01:10:29.544145 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-05 01:10:29.544153 | orchestrator | Sunday 05 April 2026 01:09:25 +0000 (0:00:08.088) 0:00:16.110 ********** 2026-04-05 01:10:29.544161 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:10:29.544169 | orchestrator | 2026-04-05 01:10:29.544177 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-05 
01:10:29.544184 | orchestrator | Sunday 05 April 2026 01:09:28 +0000 (0:00:03.800) 0:00:19.911 ********** 2026-04-05 01:10:29.544192 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-05 01:10:29.544200 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:10:29.544207 | orchestrator | 2026-04-05 01:10:29.544215 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-05 01:10:29.544235 | orchestrator | Sunday 05 April 2026 01:09:33 +0000 (0:00:04.412) 0:00:24.323 ********** 2026-04-05 01:10:29.544247 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:10:29.544261 | orchestrator | 2026-04-05 01:10:29.544275 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-05 01:10:29.544289 | orchestrator | Sunday 05 April 2026 01:09:37 +0000 (0:00:03.845) 0:00:28.169 ********** 2026-04-05 01:10:29.544310 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-05 01:10:29.544325 | orchestrator | 2026-04-05 01:10:29.544337 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 01:10:29.544352 | orchestrator | Sunday 05 April 2026 01:09:41 +0000 (0:00:04.115) 0:00:32.285 ********** 2026-04-05 01:10:29.544364 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:29.544377 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:29.544391 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:29.544442 | orchestrator | 2026-04-05 01:10:29.544457 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-05 01:10:29.544470 | orchestrator | Sunday 05 April 2026 01:09:41 +0000 (0:00:00.292) 0:00:32.577 ********** 2026-04-05 01:10:29.544488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.544522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.544551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.544565 | orchestrator | 2026-04-05 01:10:29.544579 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-05 01:10:29.544594 | orchestrator | Sunday 05 April 2026 01:09:43 +0000 (0:00:02.347) 0:00:34.924 ********** 2026-04-05 01:10:29.544607 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:29.544621 | orchestrator | 2026-04-05 01:10:29.544635 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-05 01:10:29.544649 | orchestrator | Sunday 05 April 2026 01:09:44 +0000 (0:00:00.206) 0:00:35.131 ********** 2026-04-05 01:10:29.544661 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:29.544675 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:29.544689 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:29.544703 | orchestrator | 2026-04-05 01:10:29.544716 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 01:10:29.544737 | orchestrator | Sunday 05 April 2026 01:09:44 +0000 (0:00:00.440) 0:00:35.572 ********** 2026-04-05 01:10:29.544751 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:10:29.544764 | 
orchestrator | 2026-04-05 01:10:29.544778 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-05 01:10:29.544790 | orchestrator | Sunday 05 April 2026 01:09:45 +0000 (0:00:01.051) 0:00:36.623 ********** 2026-04-05 01:10:29.544805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.544834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.544862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.544878 | orchestrator | 2026-04-05 01:10:29.544893 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-05 01:10:29.544908 | orchestrator | Sunday 05 April 2026 01:09:47 +0000 (0:00:01.798) 0:00:38.422 ********** 2026-04-05 01:10:29.544923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.544945 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:29.544960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.544975 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:29.544999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.545025 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:29.545040 | orchestrator | 2026-04-05 01:10:29.545055 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-05 01:10:29.545070 | orchestrator | Sunday 05 April 2026 01:09:47 +0000 (0:00:00.493) 0:00:38.916 ********** 2026-04-05 01:10:29.545085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.545101 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:29.545115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.545130 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:29.545151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.545167 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:29.545183 | orchestrator | 2026-04-05 01:10:29.545197 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-05 01:10:29.545267 | orchestrator | Sunday 05 April 2026 01:09:48 +0000 (0:00:00.720) 0:00:39.636 ********** 2026-04-05 01:10:29.545291 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.545317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.545332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.545347 | orchestrator | 2026-04-05 01:10:29.545363 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-05 01:10:29.545379 | orchestrator | Sunday 05 April 2026 01:09:50 +0000 (0:00:01.726) 0:00:41.363 ********** 2026-04-05 01:10:29.545436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-04-05 01:10:29.545457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.545494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.545510 | orchestrator | 2026-04-05 01:10:29.545526 | orchestrator | TASK [placement : Copying over placement-api 
wsgi configuration] *************** 2026-04-05 01:10:29.545541 | orchestrator | Sunday 05 April 2026 01:09:52 +0000 (0:00:02.232) 0:00:43.596 ********** 2026-04-05 01:10:29.545556 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-05 01:10:29.545574 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-05 01:10:29.545591 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-05 01:10:29.545608 | orchestrator | 2026-04-05 01:10:29.545623 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-05 01:10:29.545638 | orchestrator | Sunday 05 April 2026 01:09:53 +0000 (0:00:01.356) 0:00:44.952 ********** 2026-04-05 01:10:29.545654 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:29.545670 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:10:29.545686 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:10:29.545701 | orchestrator | 2026-04-05 01:10:29.545717 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-05 01:10:29.545733 | orchestrator | Sunday 05 April 2026 01:09:55 +0000 (0:00:01.449) 0:00:46.402 ********** 2026-04-05 01:10:29.545756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.545775 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:10:29.545791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.545818 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:10:29.545844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 01:10:29.545860 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:10:29.545876 | orchestrator | 2026-04-05 01:10:29.545892 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-05 01:10:29.545907 | orchestrator | Sunday 05 April 2026 01:09:56 +0000 (0:00:01.165) 0:00:47.567 ********** 2026-04-05 01:10:29.545924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.545940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.545964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 01:10:29.546000 | orchestrator | 2026-04-05 01:10:29.546070 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-05 01:10:29.546091 | orchestrator | Sunday 05 April 2026 01:09:57 +0000 (0:00:01.236) 0:00:48.804 ********** 2026-04-05 01:10:29.546107 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:29.546122 | orchestrator | 2026-04-05 01:10:29.546138 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 
2026-04-05 01:10:29.546154 | orchestrator | Sunday 05 April 2026 01:10:00 +0000 (0:00:02.658) 0:00:51.462 ********** 2026-04-05 01:10:29.546170 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:29.546185 | orchestrator | 2026-04-05 01:10:29.546200 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-05 01:10:29.546217 | orchestrator | Sunday 05 April 2026 01:10:02 +0000 (0:00:02.506) 0:00:53.969 ********** 2026-04-05 01:10:29.546232 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:29.546248 | orchestrator | 2026-04-05 01:10:29.546264 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 01:10:29.546280 | orchestrator | Sunday 05 April 2026 01:10:15 +0000 (0:00:12.895) 0:01:06.865 ********** 2026-04-05 01:10:29.546296 | orchestrator | 2026-04-05 01:10:29.546311 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 01:10:29.546326 | orchestrator | Sunday 05 April 2026 01:10:15 +0000 (0:00:00.063) 0:01:06.928 ********** 2026-04-05 01:10:29.546342 | orchestrator | 2026-04-05 01:10:29.546367 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 01:10:29.546382 | orchestrator | Sunday 05 April 2026 01:10:16 +0000 (0:00:00.073) 0:01:07.002 ********** 2026-04-05 01:10:29.546397 | orchestrator | 2026-04-05 01:10:29.546443 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-05 01:10:29.546458 | orchestrator | Sunday 05 April 2026 01:10:16 +0000 (0:00:00.083) 0:01:07.086 ********** 2026-04-05 01:10:29.546473 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:10:29.546488 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:10:29.546502 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:10:29.546517 | orchestrator | 2026-04-05 01:10:29.546532 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-05 01:10:29.546547 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:10:29.546614 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 01:10:29.546627 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 01:10:29.546641 | orchestrator | 2026-04-05 01:10:29.546655 | orchestrator | 2026-04-05 01:10:29.546669 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:10:29.546684 | orchestrator | Sunday 05 April 2026 01:10:26 +0000 (0:00:10.572) 0:01:17.658 ********** 2026-04-05 01:10:29.546699 | orchestrator | =============================================================================== 2026-04-05 01:10:29.546714 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.90s 2026-04-05 01:10:29.546742 | orchestrator | placement : Restart placement-api container ---------------------------- 10.57s 2026-04-05 01:10:29.546757 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 8.09s 2026-04-05 01:10:29.546771 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.60s 2026-04-05 01:10:29.546786 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.41s 2026-04-05 01:10:29.546800 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.12s 2026-04-05 01:10:29.546814 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.85s 2026-04-05 01:10:29.546829 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.80s 2026-04-05 01:10:29.546843 | orchestrator | placement : Creating placement 
databases -------------------------------- 2.66s 2026-04-05 01:10:29.546858 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.51s 2026-04-05 01:10:29.546872 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.35s 2026-04-05 01:10:29.546888 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.23s 2026-04-05 01:10:29.546903 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.80s 2026-04-05 01:10:29.546924 | orchestrator | placement : Copying over config.json files for services ----------------- 1.73s 2026-04-05 01:10:29.546939 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.45s 2026-04-05 01:10:29.546953 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.36s 2026-04-05 01:10:29.546968 | orchestrator | placement : Check placement containers ---------------------------------- 1.24s 2026-04-05 01:10:29.546982 | orchestrator | placement : Copying over existing policy file --------------------------- 1.17s 2026-04-05 01:10:29.546998 | orchestrator | placement : include_tasks ----------------------------------------------- 1.05s 2026-04-05 01:10:29.547013 | orchestrator | placement : include_tasks ----------------------------------------------- 0.92s 2026-04-05 01:10:32.578938 | orchestrator | 2026-04-05 01:10:32 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:32.581692 | orchestrator | 2026-04-05 01:10:32 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:32.583475 | orchestrator | 2026-04-05 01:10:32 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:32.585205 | orchestrator | 2026-04-05 01:10:32 | INFO  | Task 134735f7-9de9-4f32-98ff-f4a374d90462 is in state STARTED 2026-04-05 01:10:32.585605 | orchestrator 
| 2026-04-05 01:10:32 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:35.615717 | orchestrator | 2026-04-05 01:10:35 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:35.616075 | orchestrator | 2026-04-05 01:10:35 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:35.617013 | orchestrator | 2026-04-05 01:10:35 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:35.618306 | orchestrator | 2026-04-05 01:10:35 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:35.618708 | orchestrator | 2026-04-05 01:10:35 | INFO  | Task 134735f7-9de9-4f32-98ff-f4a374d90462 is in state SUCCESS 2026-04-05 01:10:35.618841 | orchestrator | 2026-04-05 01:10:35 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:38.670812 | orchestrator | 2026-04-05 01:10:38 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:38.672494 | orchestrator | 2026-04-05 01:10:38 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:38.674636 | orchestrator | 2026-04-05 01:10:38 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:38.676284 | orchestrator | 2026-04-05 01:10:38 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:38.676336 | orchestrator | 2026-04-05 01:10:38 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:41.706127 | orchestrator | 2026-04-05 01:10:41 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:41.707765 | orchestrator | 2026-04-05 01:10:41 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:41.709519 | orchestrator | 2026-04-05 01:10:41 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:41.710675 | orchestrator | 2026-04-05 01:10:41 | INFO  | 
Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:41.710717 | orchestrator | 2026-04-05 01:10:41 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:44.753641 | orchestrator | 2026-04-05 01:10:44 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:44.753857 | orchestrator | 2026-04-05 01:10:44 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:44.755754 | orchestrator | 2026-04-05 01:10:44 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:44.756631 | orchestrator | 2026-04-05 01:10:44 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:44.758172 | orchestrator | 2026-04-05 01:10:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:47.832959 | orchestrator | 2026-04-05 01:10:47 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:47.833063 | orchestrator | 2026-04-05 01:10:47 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:47.833085 | orchestrator | 2026-04-05 01:10:47 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:47.833103 | orchestrator | 2026-04-05 01:10:47 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:47.833122 | orchestrator | 2026-04-05 01:10:47 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:50.861038 | orchestrator | 2026-04-05 01:10:50 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:50.865701 | orchestrator | 2026-04-05 01:10:50 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:50.866745 | orchestrator | 2026-04-05 01:10:50 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:50.868593 | orchestrator | 2026-04-05 01:10:50 | INFO  | Task 
69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:50.868795 | orchestrator | 2026-04-05 01:10:50 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:53.918899 | orchestrator | 2026-04-05 01:10:53 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:53.919216 | orchestrator | 2026-04-05 01:10:53 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:53.921953 | orchestrator | 2026-04-05 01:10:53 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:53.926540 | orchestrator | 2026-04-05 01:10:53 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:53.926606 | orchestrator | 2026-04-05 01:10:53 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:10:56.953121 | orchestrator | 2026-04-05 01:10:56 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:10:56.954282 | orchestrator | 2026-04-05 01:10:56 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:10:56.955611 | orchestrator | 2026-04-05 01:10:56 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:10:56.957751 | orchestrator | 2026-04-05 01:10:56 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:10:56.958081 | orchestrator | 2026-04-05 01:10:56 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:00.074654 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:00.074755 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:00.074769 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:00.074779 | orchestrator | 2026-04-05 01:11:00 | INFO  | Task 
69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:00.074789 | orchestrator | 2026-04-05 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:03.087878 | orchestrator | 2026-04-05 01:11:03 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:03.089872 | orchestrator | 2026-04-05 01:11:03 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:03.091660 | orchestrator | 2026-04-05 01:11:03 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:03.092968 | orchestrator | 2026-04-05 01:11:03 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:03.092998 | orchestrator | 2026-04-05 01:11:03 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:06.146999 | orchestrator | 2026-04-05 01:11:06 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:06.148749 | orchestrator | 2026-04-05 01:11:06 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:06.149934 | orchestrator | 2026-04-05 01:11:06 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:06.150988 | orchestrator | 2026-04-05 01:11:06 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:06.151022 | orchestrator | 2026-04-05 01:11:06 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:09.209126 | orchestrator | 2026-04-05 01:11:09 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:09.212903 | orchestrator | 2026-04-05 01:11:09 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:09.214385 | orchestrator | 2026-04-05 01:11:09 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:09.215518 | orchestrator | 2026-04-05 01:11:09 | INFO  | Task 
69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:09.215554 | orchestrator | 2026-04-05 01:11:09 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:12.259182 | orchestrator | 2026-04-05 01:11:12 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:12.259931 | orchestrator | 2026-04-05 01:11:12 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:12.261096 | orchestrator | 2026-04-05 01:11:12 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:12.262278 | orchestrator | 2026-04-05 01:11:12 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:12.262326 | orchestrator | 2026-04-05 01:11:12 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:15.305179 | orchestrator | 2026-04-05 01:11:15 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:15.307650 | orchestrator | 2026-04-05 01:11:15 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:15.309092 | orchestrator | 2026-04-05 01:11:15 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:15.311083 | orchestrator | 2026-04-05 01:11:15 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:15.311138 | orchestrator | 2026-04-05 01:11:15 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:18.359905 | orchestrator | 2026-04-05 01:11:18 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:18.362715 | orchestrator | 2026-04-05 01:11:18 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:18.363535 | orchestrator | 2026-04-05 01:11:18 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:18.364600 | orchestrator | 2026-04-05 01:11:18 | INFO  | Task 
69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:18.364674 | orchestrator | 2026-04-05 01:11:18 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:21.398851 | orchestrator | 2026-04-05 01:11:21 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:21.399592 | orchestrator | 2026-04-05 01:11:21 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:21.400598 | orchestrator | 2026-04-05 01:11:21 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:21.402317 | orchestrator | 2026-04-05 01:11:21 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:21.402342 | orchestrator | 2026-04-05 01:11:21 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:24.454783 | orchestrator | 2026-04-05 01:11:24 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:24.457720 | orchestrator | 2026-04-05 01:11:24 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:24.460644 | orchestrator | 2026-04-05 01:11:24 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:24.466305 | orchestrator | 2026-04-05 01:11:24 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:24.466719 | orchestrator | 2026-04-05 01:11:24 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:27.511580 | orchestrator | 2026-04-05 01:11:27 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:27.513109 | orchestrator | 2026-04-05 01:11:27 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:27.520210 | orchestrator | 2026-04-05 01:11:27 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:27.521923 | orchestrator | 2026-04-05 01:11:27 | INFO  | Task 
69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:27.521993 | orchestrator | 2026-04-05 01:11:27 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:30.568509 | orchestrator | 2026-04-05 01:11:30 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:30.570656 | orchestrator | 2026-04-05 01:11:30 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:30.572600 | orchestrator | 2026-04-05 01:11:30 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:30.574473 | orchestrator | 2026-04-05 01:11:30 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:30.574548 | orchestrator | 2026-04-05 01:11:30 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:33.622066 | orchestrator | 2026-04-05 01:11:33 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:33.625558 | orchestrator | 2026-04-05 01:11:33 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state STARTED 2026-04-05 01:11:33.627406 | orchestrator | 2026-04-05 01:11:33 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:33.629649 | orchestrator | 2026-04-05 01:11:33 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:33.629851 | orchestrator | 2026-04-05 01:11:33 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:36.672280 | orchestrator | 2026-04-05 01:11:36 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:36.677653 | orchestrator | 2026-04-05 01:11:36 | INFO  | Task 9e88a4a6-3dbc-4a0f-b508-16a3cd81a261 is in state SUCCESS 2026-04-05 01:11:36.677741 | orchestrator | 2026-04-05 01:11:36 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:36.679745 | orchestrator | 2026-04-05 01:11:36.679843 | orchestrator | 2026-04-05 
01:11:36.679868 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:11:36.679889 | orchestrator | 2026-04-05 01:11:36.679909 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:11:36.679929 | orchestrator | Sunday 05 April 2026 01:10:31 +0000 (0:00:00.186) 0:00:00.186 ********** 2026-04-05 01:11:36.679947 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:36.679967 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:36.679985 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:36.680003 | orchestrator | 2026-04-05 01:11:36.680021 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:11:36.680040 | orchestrator | Sunday 05 April 2026 01:10:31 +0000 (0:00:00.433) 0:00:00.619 ********** 2026-04-05 01:11:36.680059 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-05 01:11:36.680079 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-05 01:11:36.680131 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-05 01:11:36.680149 | orchestrator | 2026-04-05 01:11:36.680166 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-04-05 01:11:36.680184 | orchestrator | 2026-04-05 01:11:36.680203 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-05 01:11:36.680215 | orchestrator | Sunday 05 April 2026 01:10:32 +0000 (0:00:00.729) 0:00:01.349 ********** 2026-04-05 01:11:36.680226 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:36.680237 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:36.680248 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:36.680260 | orchestrator | 2026-04-05 01:11:36.680272 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 
01:11:36.680284 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:11:36.680297 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:11:36.680308 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:11:36.680320 | orchestrator | 2026-04-05 01:11:36.680331 | orchestrator | 2026-04-05 01:11:36.680342 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:11:36.680353 | orchestrator | Sunday 05 April 2026 01:10:33 +0000 (0:00:01.157) 0:00:02.506 ********** 2026-04-05 01:11:36.680364 | orchestrator | =============================================================================== 2026-04-05 01:11:36.680405 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.16s 2026-04-05 01:11:36.680415 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2026-04-05 01:11:36.680424 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2026-04-05 01:11:36.680520 | orchestrator | 2026-04-05 01:11:36.680531 | orchestrator | 2026-04-05 01:11:36.680541 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:11:36.680550 | orchestrator | 2026-04-05 01:11:36.680560 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:11:36.680570 | orchestrator | Sunday 05 April 2026 01:09:45 +0000 (0:00:00.510) 0:00:00.510 ********** 2026-04-05 01:11:36.680579 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:36.680589 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:36.680598 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:36.680609 | orchestrator | 2026-04-05 01:11:36.680619 | orchestrator | TASK [Group 
hosts based on enabled services] *********************************** 2026-04-05 01:11:36.680629 | orchestrator | Sunday 05 April 2026 01:09:45 +0000 (0:00:00.549) 0:00:01.060 ********** 2026-04-05 01:11:36.680638 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-05 01:11:36.680648 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-05 01:11:36.680658 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-05 01:11:36.680667 | orchestrator | 2026-04-05 01:11:36.680677 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-05 01:11:36.680686 | orchestrator | 2026-04-05 01:11:36.680696 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 01:11:36.680705 | orchestrator | Sunday 05 April 2026 01:09:46 +0000 (0:00:00.573) 0:00:01.633 ********** 2026-04-05 01:11:36.680715 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:11:36.680725 | orchestrator | 2026-04-05 01:11:36.680734 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-05 01:11:36.680744 | orchestrator | Sunday 05 April 2026 01:09:46 +0000 (0:00:00.701) 0:00:02.334 ********** 2026-04-05 01:11:36.680754 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-05 01:11:36.680763 | orchestrator | 2026-04-05 01:11:36.680788 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-05 01:11:36.680798 | orchestrator | Sunday 05 April 2026 01:09:51 +0000 (0:00:04.258) 0:00:06.593 ********** 2026-04-05 01:11:36.680808 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-05 01:11:36.680817 | orchestrator | changed: [testbed-node-0] => (item=magnum -> 
https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-05 01:11:36.680827 | orchestrator | 2026-04-05 01:11:36.680836 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-05 01:11:36.680846 | orchestrator | Sunday 05 April 2026 01:09:57 +0000 (0:00:06.355) 0:00:12.949 ********** 2026-04-05 01:11:36.680855 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:11:36.680868 | orchestrator | 2026-04-05 01:11:36.680884 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-05 01:11:36.680900 | orchestrator | Sunday 05 April 2026 01:10:01 +0000 (0:00:04.230) 0:00:17.180 ********** 2026-04-05 01:11:36.680942 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-05 01:11:36.680962 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:11:36.680978 | orchestrator | 2026-04-05 01:11:36.680995 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-05 01:11:36.681008 | orchestrator | Sunday 05 April 2026 01:10:05 +0000 (0:00:03.796) 0:00:20.976 ********** 2026-04-05 01:11:36.681018 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:11:36.681028 | orchestrator | 2026-04-05 01:11:36.681038 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-05 01:11:36.681059 | orchestrator | Sunday 05 April 2026 01:10:08 +0000 (0:00:02.882) 0:00:23.858 ********** 2026-04-05 01:11:36.681069 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-05 01:11:36.681078 | orchestrator | 2026-04-05 01:11:36.681088 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-05 01:11:36.681101 | orchestrator | Sunday 05 April 2026 01:10:12 +0000 (0:00:03.635) 0:00:27.494 ********** 2026-04-05 01:11:36.681117 | orchestrator | 
changed: [testbed-node-0] 2026-04-05 01:11:36.681133 | orchestrator | 2026-04-05 01:11:36.681148 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-05 01:11:36.681164 | orchestrator | Sunday 05 April 2026 01:10:15 +0000 (0:00:03.380) 0:00:30.875 ********** 2026-04-05 01:11:36.681181 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:36.681197 | orchestrator | 2026-04-05 01:11:36.681213 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-05 01:11:36.681230 | orchestrator | Sunday 05 April 2026 01:10:20 +0000 (0:00:04.695) 0:00:35.570 ********** 2026-04-05 01:11:36.681247 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:36.681264 | orchestrator | 2026-04-05 01:11:36.681281 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-05 01:11:36.681298 | orchestrator | Sunday 05 April 2026 01:10:23 +0000 (0:00:03.298) 0:00:38.868 ********** 2026-04-05 01:11:36.681321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681341 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.681471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.681487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.681504 | orchestrator | 2026-04-05 01:11:36.681520 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-05 01:11:36.681534 | orchestrator | Sunday 05 April 2026 01:10:25 +0000 (0:00:02.215) 0:00:41.085 ********** 2026-04-05 01:11:36.681550 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:36.681566 | orchestrator | 2026-04-05 01:11:36.681582 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-05 01:11:36.681599 | orchestrator | Sunday 05 April 2026 01:10:25 +0000 (0:00:00.180) 0:00:41.265 ********** 2026-04-05 01:11:36.681616 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:36.681632 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:36.681644 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:36.681654 | orchestrator | 2026-04-05 01:11:36.681663 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-05 01:11:36.681673 | orchestrator | Sunday 05 April 2026 01:10:26 +0000 (0:00:00.454) 0:00:41.720 ********** 2026-04-05 01:11:36.681682 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:11:36.681692 | orchestrator | 2026-04-05 01:11:36.681701 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-05 01:11:36.681711 | orchestrator | Sunday 05 April 2026 01:10:27 +0000 (0:00:01.256) 0:00:42.976 ********** 2026-04-05 01:11:36.681729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.681794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.681809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.681825 | orchestrator | 2026-04-05 01:11:36.681835 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-05 01:11:36.681845 | orchestrator | Sunday 05 April 2026 01:10:30 +0000 (0:00:03.085) 0:00:46.061 ********** 2026-04-05 01:11:36.681855 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:11:36.681865 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:11:36.681874 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:11:36.681884 | orchestrator | 2026-04-05 01:11:36.681893 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 01:11:36.681909 | orchestrator | Sunday 05 April 2026 01:10:31 +0000 (0:00:00.496) 0:00:46.558 ********** 2026-04-05 01:11:36.681919 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:11:36.681929 | orchestrator | 2026-04-05 01:11:36.681938 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-05 01:11:36.681948 | orchestrator | Sunday 05 April 2026 01:10:31 +0000 (0:00:00.560) 0:00:47.119 ********** 
2026-04-05 01:11:36.681958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.681999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682087 | orchestrator | 2026-04-05 01:11:36.682097 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-05 01:11:36.682106 | orchestrator | Sunday 05 April 2026 01:10:33 +0000 (0:00:02.211) 0:00:49.330 ********** 2026-04-05 01:11:36.682116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682143 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:36.682159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682189 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:36.682199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682209 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682225 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:36.682234 | orchestrator | 2026-04-05 01:11:36.682244 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-05 01:11:36.682254 | orchestrator | Sunday 05 April 2026 01:10:35 +0000 (0:00:01.294) 0:00:50.625 ********** 2026-04-05 01:11:36.682275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2026-04-05 01:11:36.682286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682296 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:36.682313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682334 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:36.682345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682408 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:36.682452 | orchestrator | 2026-04-05 01:11:36.682470 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-05 01:11:36.682486 | orchestrator | Sunday 05 April 2026 01:10:36 +0000 (0:00:01.002) 0:00:51.627 ********** 2026-04-05 01:11:36.682536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.682555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.682572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.682622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682696 | orchestrator | 2026-04-05 01:11:36.682714 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2026-04-05 01:11:36.682731 | orchestrator | Sunday 05 April 2026 01:10:38 +0000 (0:00:02.345) 0:00:53.973 ********** 2026-04-05 01:11:36.682747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.682765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.682794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.682818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.682859 | orchestrator | 2026-04-05 01:11:36.682869 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-05 01:11:36.682885 | orchestrator | Sunday 05 April 2026 01:10:44 +0000 (0:00:05.862) 0:00:59.836 ********** 2026-04-05 01:11:36.682895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682914 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:36.682929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682957 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:36.682967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 01:11:36.682983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:11:36.682993 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:36.683003 | orchestrator | 2026-04-05 01:11:36.683013 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-05 01:11:36.683027 | orchestrator | Sunday 05 April 2026 01:10:45 +0000 (0:00:00.642) 0:01:00.478 ********** 2026-04-05 01:11:36.683049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.683075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.683093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 01:11:36.683123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.683218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.683230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:11:36.683240 | orchestrator | 2026-04-05 01:11:36.683255 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 01:11:36.683265 | orchestrator | Sunday 05 April 2026 01:10:47 +0000 (0:00:02.044) 0:01:02.523 ********** 2026-04-05 01:11:36.683275 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:11:36.683286 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:11:36.683295 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:11:36.683305 | orchestrator | 2026-04-05 01:11:36.683315 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-05 01:11:36.683325 | orchestrator | Sunday 05 April 2026 01:10:47 +0000 (0:00:00.305) 0:01:02.828 ********** 2026-04-05 01:11:36.683334 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:36.683344 | orchestrator | 2026-04-05 01:11:36.683353 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-05 01:11:36.683363 | orchestrator | Sunday 05 April 2026 01:10:50 +0000 (0:00:02.626) 0:01:05.455 ********** 2026-04-05 01:11:36.683373 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:36.683382 | orchestrator | 2026-04-05 01:11:36.683392 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-05 01:11:36.683401 | orchestrator | Sunday 05 April 2026 01:10:52 +0000 (0:00:02.877) 0:01:08.333 ********** 2026-04-05 01:11:36.683418 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:36.683467 | orchestrator | 2026-04-05 
01:11:36.683480 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 01:11:36.683489 | orchestrator | Sunday 05 April 2026 01:11:10 +0000 (0:00:17.934) 0:01:26.268 ********** 2026-04-05 01:11:36.683512 | orchestrator | 2026-04-05 01:11:36.683522 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 01:11:36.683534 | orchestrator | Sunday 05 April 2026 01:11:11 +0000 (0:00:00.262) 0:01:26.531 ********** 2026-04-05 01:11:36.683550 | orchestrator | 2026-04-05 01:11:36.683566 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 01:11:36.683582 | orchestrator | Sunday 05 April 2026 01:11:11 +0000 (0:00:00.062) 0:01:26.593 ********** 2026-04-05 01:11:36.683597 | orchestrator | 2026-04-05 01:11:36.683614 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-05 01:11:36.683630 | orchestrator | Sunday 05 April 2026 01:11:11 +0000 (0:00:00.066) 0:01:26.660 ********** 2026-04-05 01:11:36.683646 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:36.683660 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:11:36.683669 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:11:36.683679 | orchestrator | 2026-04-05 01:11:36.683689 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-05 01:11:36.683698 | orchestrator | Sunday 05 April 2026 01:11:24 +0000 (0:00:13.255) 0:01:39.916 ********** 2026-04-05 01:11:36.683707 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:11:36.683718 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:11:36.683734 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:11:36.683750 | orchestrator | 2026-04-05 01:11:36.683763 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:11:36.683779 | 
orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 01:11:36.683798 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 01:11:36.683814 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 01:11:36.683830 | orchestrator | 2026-04-05 01:11:36.683843 | orchestrator | 2026-04-05 01:11:36.683853 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:11:36.683863 | orchestrator | Sunday 05 April 2026 01:11:34 +0000 (0:00:10.128) 0:01:50.044 ********** 2026-04-05 01:11:36.683872 | orchestrator | =============================================================================== 2026-04-05 01:11:36.683881 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.93s 2026-04-05 01:11:36.683891 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.26s 2026-04-05 01:11:36.683900 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.13s 2026-04-05 01:11:36.683910 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.36s 2026-04-05 01:11:36.683919 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.86s 2026-04-05 01:11:36.683929 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.70s 2026-04-05 01:11:36.683938 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.26s 2026-04-05 01:11:36.683948 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.23s 2026-04-05 01:11:36.683957 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.80s 2026-04-05 01:11:36.683967 | orchestrator | 
service-ks-register : magnum | Granting user roles ---------------------- 3.64s 2026-04-05 01:11:36.683976 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.38s 2026-04-05 01:11:36.683986 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.30s 2026-04-05 01:11:36.683995 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.09s 2026-04-05 01:11:36.684004 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.88s 2026-04-05 01:11:36.684024 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.88s 2026-04-05 01:11:36.684034 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.63s 2026-04-05 01:11:36.684043 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.35s 2026-04-05 01:11:36.684062 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.22s 2026-04-05 01:11:36.684072 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.21s 2026-04-05 01:11:36.684082 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.04s 2026-04-05 01:11:36.684092 | orchestrator | 2026-04-05 01:11:36 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:36.684102 | orchestrator | 2026-04-05 01:11:36 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:39.725663 | orchestrator | 2026-04-05 01:11:39 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:39.726740 | orchestrator | 2026-04-05 01:11:39 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:39.728888 | orchestrator | 2026-04-05 01:11:39 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 
01:11:39.728990 | orchestrator | 2026-04-05 01:11:39 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:11:42.764826 | orchestrator | 2026-04-05 01:11:42 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state STARTED 2026-04-05 01:11:42.765339 | orchestrator | 2026-04-05 01:11:42 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state STARTED 2026-04-05 01:11:42.767894 | orchestrator | 2026-04-05 01:11:42 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:11:42.767976 | orchestrator | 2026-04-05 01:11:42 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:13:45.910256 | orchestrator | 2026-04-05 01:13:45 | INFO  | Task b5e2ef28-6577-4f62-94bc-3cb1b5248189 is in state SUCCESS 2026-04-05 01:13:45.915729 | orchestrator | 2026-04-05 01:13:45.915817 | orchestrator | 2026-04-05 01:13:45.915834 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:13:45.915847 | orchestrator | 2026-04-05 01:13:45.915858 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-05 01:13:45.915870 | orchestrator | Sunday 05 April 2026 01:02:56 +0000 (0:00:00.692) 0:00:00.692 ********** 2026-04-05 01:13:45.915881 | orchestrator | changed: [testbed-manager] 2026-04-05 01:13:45.915893 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:13:45.915905 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:13:45.915916 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:13:45.915927 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:13:45.915938 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:13:45.915949 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:13:45.915959 | orchestrator | 2026-04-05 01:13:45.915970 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:13:45.915981 | orchestrator | Sunday 05 April 2026 01:02:58 +0000 
(0:00:01.896) 0:00:02.589 **********
2026-04-05 01:13:45.915992 | orchestrator | changed: [testbed-manager]
2026-04-05 01:13:45.916002 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.916013 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:13:45.916024 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:13:45.916035 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:13:45.916045 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:13:45.916056 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:13:45.916067 | orchestrator |
2026-04-05 01:13:45.916078 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 01:13:45.916089 | orchestrator | Sunday 05 April 2026 01:02:59 +0000 (0:00:01.258) 0:00:03.848 **********
2026-04-05 01:13:45.916100 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-05 01:13:45.916137 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-05 01:13:45.916151 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-05 01:13:45.916163 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-05 01:13:45.916175 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-05 01:13:45.916187 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-05 01:13:45.916199 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-05 01:13:45.916212 | orchestrator |
2026-04-05 01:13:45.916225 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-05 01:13:45.916400 | orchestrator |
2026-04-05 01:13:45.916414 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-05 01:13:45.916427 | orchestrator | Sunday 05 April 2026 01:03:00 +0000 (0:00:01.377) 0:00:05.226 **********
2026-04-05 01:13:45.916440 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:13:45.916452 | orchestrator |
2026-04-05 01:13:45.916463 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-05 01:13:45.916509 | orchestrator | Sunday 05 April 2026 01:03:01 +0000 (0:00:00.789) 0:00:06.015 **********
2026-04-05 01:13:45.916521 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-05 01:13:45.916533 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-05 01:13:45.916543 | orchestrator |
2026-04-05 01:13:45.916554 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-05 01:13:45.916565 | orchestrator | Sunday 05 April 2026 01:03:06 +0000 (0:00:04.847) 0:00:10.863 **********
2026-04-05 01:13:45.916575 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:13:45.916586 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 01:13:45.916597 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.916608 | orchestrator |
2026-04-05 01:13:45.916618 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-05 01:13:45.916629 | orchestrator | Sunday 05 April 2026 01:03:10 +0000 (0:00:04.710) 0:00:15.574 **********
2026-04-05 01:13:45.916640 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.916650 | orchestrator |
2026-04-05 01:13:45.916676 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-05 01:13:45.916687 | orchestrator | Sunday 05 April 2026 01:03:11 +0000 (0:00:00.759) 0:00:16.334 **********
2026-04-05 01:13:45.916698 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.916709 | orchestrator |
2026-04-05 01:13:45.916719 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-05 01:13:45.916730 | orchestrator | Sunday 05 April
2026 01:03:13 +0000 (0:00:01.719) 0:00:18.053 **********
2026-04-05 01:13:45.916741 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.916752 | orchestrator |
2026-04-05 01:13:45.916763 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 01:13:45.916773 | orchestrator | Sunday 05 April 2026 01:03:16 +0000 (0:00:02.834) 0:00:20.888 **********
2026-04-05 01:13:45.916784 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.916795 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.916805 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.916816 | orchestrator |
2026-04-05 01:13:45.916827 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-05 01:13:45.916837 | orchestrator | Sunday 05 April 2026 01:03:17 +0000 (0:00:01.019) 0:00:21.908 **********
2026-04-05 01:13:45.916848 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.916859 | orchestrator |
2026-04-05 01:13:45.916869 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-05 01:13:45.916880 | orchestrator | Sunday 05 April 2026 01:03:51 +0000 (0:00:34.161) 0:00:56.070 **********
2026-04-05 01:13:45.916891 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.916901 | orchestrator |
2026-04-05 01:13:45.916912 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-05 01:13:45.916934 | orchestrator | Sunday 05 April 2026 01:04:08 +0000 (0:00:17.005) 0:01:13.075 **********
2026-04-05 01:13:45.916945 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.916955 | orchestrator |
2026-04-05 01:13:45.916966 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-05 01:13:45.916977 | orchestrator | Sunday 05 April 2026 01:04:24 +0000 (0:00:15.580) 0:01:28.656 **********
2026-04-05 01:13:45.917006 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.917017 | orchestrator |
2026-04-05 01:13:45.917028 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-05 01:13:45.917080 | orchestrator | Sunday 05 April 2026 01:04:24 +0000 (0:00:00.683) 0:01:29.340 **********
2026-04-05 01:13:45.917091 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.917102 | orchestrator |
2026-04-05 01:13:45.917113 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 01:13:45.917123 | orchestrator | Sunday 05 April 2026 01:04:25 +0000 (0:00:00.459) 0:01:29.800 **********
2026-04-05 01:13:45.917134 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:13:45.917145 | orchestrator |
2026-04-05 01:13:45.917156 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-05 01:13:45.917167 | orchestrator | Sunday 05 April 2026 01:04:25 +0000 (0:00:00.684) 0:01:30.484 **********
2026-04-05 01:13:45.917178 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.917262 | orchestrator |
2026-04-05 01:13:45.917276 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-05 01:13:45.917287 | orchestrator | Sunday 05 April 2026 01:04:47 +0000 (0:00:22.061) 0:01:52.546 **********
2026-04-05 01:13:45.917298 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.917308 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.917319 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.917330 | orchestrator |
2026-04-05 01:13:45.917341 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-05 01:13:45.917351 | orchestrator |
2026-04-05 01:13:45.917362 | orchestrator | TASK [Bootstrap deploy]
********************************************************
2026-04-05 01:13:45.917373 | orchestrator | Sunday 05 April 2026 01:04:48 +0000 (0:00:00.277) 0:01:52.823 **********
2026-04-05 01:13:45.917384 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:13:45.917394 | orchestrator |
2026-04-05 01:13:45.917405 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-05 01:13:45.917416 | orchestrator | Sunday 05 April 2026 01:04:48 +0000 (0:00:00.665) 0:01:53.489 **********
2026-04-05 01:13:45.917426 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.917437 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.917448 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.917458 | orchestrator |
2026-04-05 01:13:45.917489 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-05 01:13:45.917501 | orchestrator | Sunday 05 April 2026 01:04:51 +0000 (0:00:02.241) 0:01:55.731 **********
2026-04-05 01:13:45.917512 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.917596 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.917609 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.917620 | orchestrator |
2026-04-05 01:13:45.917631 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-05 01:13:45.917642 | orchestrator | Sunday 05 April 2026 01:04:53 +0000 (0:00:02.313) 0:01:58.044 **********
2026-04-05 01:13:45.917653 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.917663 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.917674 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.917685 | orchestrator |
2026-04-05 01:13:45.917696 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-05 01:13:45.917706 | orchestrator | Sunday 05 April 2026 01:04:54 +0000 (0:00:00.583) 0:01:58.628 **********
2026-04-05 01:13:45.917725 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:13:45.917736 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.917747 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:13:45.917757 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.917768 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-05 01:13:45.917779 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-05 01:13:45.917790 | orchestrator |
2026-04-05 01:13:45.917800 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-05 01:13:45.917818 | orchestrator | Sunday 05 April 2026 01:05:02 +0000 (0:00:08.224) 0:02:06.853 **********
2026-04-05 01:13:45.917829 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.917840 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.917850 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.917861 | orchestrator |
2026-04-05 01:13:45.917872 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-05 01:13:45.917883 | orchestrator | Sunday 05 April 2026 01:05:02 +0000 (0:00:00.382) 0:02:07.236 **********
2026-04-05 01:13:45.917894 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 01:13:45.917904 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.917915 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 01:13:45.917926 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.917936 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 01:13:45.917947 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.917958 | orchestrator |
2026-04-05 01:13:45.917968 | orchestrator | TASK [nova-cell : Ensuring config directories exist]
***************************
2026-04-05 01:13:45.917979 | orchestrator | Sunday 05 April 2026 01:05:04 +0000 (0:00:01.965) 0:02:09.201 **********
2026-04-05 01:13:45.917990 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918001 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.918011 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918092 | orchestrator |
2026-04-05 01:13:45.918104 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-05 01:13:45.918115 | orchestrator | Sunday 05 April 2026 01:05:05 +0000 (0:00:00.689) 0:02:09.890 **********
2026-04-05 01:13:45.918153 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918165 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918189 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.918200 | orchestrator |
2026-04-05 01:13:45.918211 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-05 01:13:45.918222 | orchestrator | Sunday 05 April 2026 01:05:06 +0000 (0:00:01.271) 0:02:11.162 **********
2026-04-05 01:13:45.918233 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918244 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918265 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.918310 | orchestrator |
2026-04-05 01:13:45.918321 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-05 01:13:45.918332 | orchestrator | Sunday 05 April 2026 01:05:08 +0000 (0:00:02.399) 0:02:13.562 **********
2026-04-05 01:13:45.918343 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918353 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918364 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.918375 | orchestrator |
2026-04-05 01:13:45.918386 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-05 01:13:45.918396 | orchestrator | Sunday 05 April 2026 01:05:33 +0000 (0:00:24.671) 0:02:38.234 **********
2026-04-05 01:13:45.918407 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918418 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918429 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.918439 | orchestrator |
2026-04-05 01:13:45.918450 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-05 01:13:45.918461 | orchestrator | Sunday 05 April 2026 01:05:48 +0000 (0:00:14.354) 0:02:52.588 **********
2026-04-05 01:13:45.918527 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.918540 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918563 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918575 | orchestrator |
2026-04-05 01:13:45.918586 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-05 01:13:45.918597 | orchestrator | Sunday 05 April 2026 01:05:48 +0000 (0:00:00.787) 0:02:53.376 **********
2026-04-05 01:13:45.918607 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918618 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918628 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.918639 | orchestrator |
2026-04-05 01:13:45.918650 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-05 01:13:45.918660 | orchestrator | Sunday 05 April 2026 01:06:03 +0000 (0:00:14.435) 0:03:07.811 **********
2026-04-05 01:13:45.918671 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.918682 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918692 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918703 | orchestrator |
2026-04-05 01:13:45.918713 | orchestrator | TASK [Bootstrap upgrade]
*******************************************************
2026-04-05 01:13:45.918724 | orchestrator | Sunday 05 April 2026 01:06:06 +0000 (0:00:02.871) 0:03:10.683 **********
2026-04-05 01:13:45.918735 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.918745 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.918756 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.918767 | orchestrator |
2026-04-05 01:13:45.918777 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-05 01:13:45.918788 | orchestrator |
2026-04-05 01:13:45.918799 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 01:13:45.918810 | orchestrator | Sunday 05 April 2026 01:06:06 +0000 (0:00:00.366) 0:03:11.049 **********
2026-04-05 01:13:45.918820 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:13:45.918833 | orchestrator |
2026-04-05 01:13:45.918844 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-04-05 01:13:45.918854 | orchestrator | Sunday 05 April 2026 01:06:07 +0000 (0:00:00.798) 0:03:11.848 **********
2026-04-05 01:13:45.918904 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-05 01:13:45.918915 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-05 01:13:45.918926 | orchestrator |
2026-04-05 01:13:45.918937 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-04-05 01:13:45.918947 | orchestrator | Sunday 05 April 2026 01:06:11 +0000 (0:00:03.769) 0:03:15.617 **********
2026-04-05 01:13:45.918958 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-05 01:13:45.918977 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-05 01:13:45.918988 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-05 01:13:45.918999 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-05 01:13:45.919010 | orchestrator |
2026-04-05 01:13:45.919021 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-05 01:13:45.919032 | orchestrator | Sunday 05 April 2026 01:06:18 +0000 (0:00:07.330) 0:03:22.948 **********
2026-04-05 01:13:45.919042 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 01:13:45.919053 | orchestrator |
2026-04-05 01:13:45.919064 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-05 01:13:45.919075 | orchestrator | Sunday 05 April 2026 01:06:22 +0000 (0:00:03.784) 0:03:26.733 **********
2026-04-05 01:13:45.919085 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-05 01:13:45.919105 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 01:13:45.919116 | orchestrator |
2026-04-05 01:13:45.919127 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-05 01:13:45.919138 | orchestrator | Sunday 05 April 2026 01:06:26 +0000 (0:00:04.263) 0:03:30.997 **********
2026-04-05 01:13:45.919148 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 01:13:45.919159 | orchestrator |
2026-04-05 01:13:45.919170 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-04-05 01:13:45.919180 | orchestrator | Sunday 05 April 2026 01:06:30 +0000 (0:00:03.687) 0:03:34.684 **********
2026-04-05 01:13:45.919191 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-05
01:13:45.919202 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-05 01:13:45.919213 | orchestrator | 2026-04-05 01:13:45.919223 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-05 01:13:45.919241 | orchestrator | Sunday 05 April 2026 01:06:38 +0000 (0:00:08.157) 0:03:42.842 ********** 2026-04-05 01:13:45.919259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.919277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.919296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.919327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.919340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.919353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.919364 | orchestrator |
2026-04-05 01:13:45.919376 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-05 01:13:45.919387 | orchestrator | Sunday 05 April 2026 01:06:40 +0000 (0:00:02.653) 0:03:45.495 **********
2026-04-05 01:13:45.919397 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.919408 | orchestrator |
2026-04-05 01:13:45.919419 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-05 01:13:45.919430 | orchestrator | Sunday 05 April 2026 01:06:41 +0000 (0:00:00.386) 0:03:45.882 **********
2026-04-05 01:13:45.919440 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.919534 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.919545 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.919555 | orchestrator |
2026-04-05 01:13:45.919567 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-05 01:13:45.919587 | orchestrator | Sunday 05 April 2026 01:06:42 +0000 (0:00:00.949) 0:03:46.831 **********
2026-04-05 01:13:45.919598 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 01:13:45.919609 | orchestrator |
2026-04-05 01:13:45.919620 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-04-05 01:13:45.919631 | orchestrator | Sunday 05 April 2026 01:06:43 +0000 (0:00:01.131) 0:03:47.962 **********
2026-04-05 01:13:45.919641 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.919652 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.919704 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.919716 | orchestrator |
2026-04-05 01:13:45.919742 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 01:13:45.919753 | orchestrator | Sunday 05 April 2026
01:06:44 +0000 (0:00:01.017) 0:03:48.980 ********** 2026-04-05 01:13:45.919764 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:13:45.919775 | orchestrator | 2026-04-05 01:13:45.919792 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-05 01:13:45.919813 | orchestrator | Sunday 05 April 2026 01:06:46 +0000 (0:00:01.931) 0:03:50.911 ********** 2026-04-05 01:13:45.919827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.919849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.919863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.919888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.919901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.919976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.920000 | orchestrator | 2026-04-05 01:13:45.920012 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-05 01:13:45.920023 | orchestrator | Sunday 05 April 2026 01:06:49 +0000 (0:00:03.341) 0:03:54.253 ********** 2026-04-05 01:13:45.920035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920074 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.920091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920115 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.920135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920169 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.920180 | orchestrator | 2026-04-05 01:13:45.920191 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-05 01:13:45.920202 | orchestrator | Sunday 05 April 2026 01:06:50 +0000 (0:00:01.260) 0:03:55.513 ********** 2026-04-05 01:13:45.920219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920231 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920243 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.920264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920277 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920353 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.920364 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.920375 | orchestrator | 2026-04-05 01:13:45.920387 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-05 01:13:45.920397 | orchestrator | Sunday 05 April 2026 01:06:51 +0000 (0:00:00.863) 0:03:56.377 ********** 2026-04-05 01:13:45.920416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.920429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.920454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.920466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.920509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.920521 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.920533 | orchestrator | 2026-04-05 01:13:45.920544 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-05 01:13:45.920555 | orchestrator | Sunday 05 April 2026 01:06:54 +0000 (0:00:02.766) 0:03:59.143 ********** 2026-04-05 01:13:45.920574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.920591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.920610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 01:13:45.920623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.920641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.920652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.920663 | orchestrator | 2026-04-05 01:13:45.920675 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-05 01:13:45.920685 | orchestrator | Sunday 05 April 2026 01:07:03 +0000 (0:00:09.360) 0:04:08.503 ********** 2026-04-05 01:13:45.920702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920731 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.920743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 
01:13:45.920761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.920773 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.920790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 01:13:45.920802 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.920814 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.920825 | orchestrator |
2026-04-05 01:13:45.920836 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-04-05 01:13:45.920847 | orchestrator | Sunday 05 April 2026 01:07:04 +0000 (0:00:00.929) 0:04:09.432 **********
2026-04-05 01:13:45.920858 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.920869 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:13:45.920879 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:13:45.920891 | orchestrator |
2026-04-05 01:13:45.920907 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-04-05 01:13:45.920919 | orchestrator | Sunday 05 April 2026 01:07:07 +0000 (0:00:02.327) 0:04:11.760 **********
2026-04-05 01:13:45.920936 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.920947 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.920958 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.920968 | orchestrator |
2026-04-05 01:13:45.920979 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-04-05 01:13:45.920990 | orchestrator | Sunday 05 April 2026 01:07:07 +0000 (0:00:00.533) 0:04:12.293 **********
2026-04-05 01:13:45.921002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 01:13:45.921022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 01:13:45.921042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 01:13:45.921062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.921074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.921085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.921096 | orchestrator |
2026-04-05 01:13:45.921108 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-05 01:13:45.921119 | orchestrator | Sunday 05 April 2026 01:07:10 +0000 (0:00:02.791) 0:04:15.085 **********
2026-04-05 01:13:45.921129 | orchestrator |
2026-04-05 01:13:45.921140 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-05 01:13:45.921151 | orchestrator | Sunday 05 April 2026 01:07:10 +0000 (0:00:00.467) 0:04:15.553 **********
2026-04-05 01:13:45.921162 | orchestrator |
2026-04-05 01:13:45.921173 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-05 01:13:45.921183 | orchestrator | Sunday 05 April 2026 01:07:11 +0000 (0:00:00.424) 0:04:15.978 **********
2026-04-05 01:13:45.921194 | orchestrator |
2026-04-05 01:13:45.921205 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-04-05 01:13:45.921215 | orchestrator | Sunday 05 April 2026 01:07:11 +0000 (0:00:00.582) 0:04:16.560 **********
2026-04-05 01:13:45.921226 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.921237 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:13:45.921247 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:13:45.921258 | orchestrator |
2026-04-05 01:13:45.921269 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-04-05 01:13:45.921285 | orchestrator | Sunday 05 April 2026 01:07:35 +0000 (0:00:23.706) 0:04:40.267 **********
2026-04-05 01:13:45.921296 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.921306 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:13:45.921317 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:13:45.921328 | orchestrator |
2026-04-05 01:13:45.921339 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-05 01:13:45.921350 | orchestrator |
2026-04-05 01:13:45.921360 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 01:13:45.921371 | orchestrator | Sunday 05 April 2026 01:07:43 +0000 (0:00:07.616) 0:04:47.884 **********
2026-04-05 01:13:45.921390 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:13:45.921401 | orchestrator |
2026-04-05 01:13:45.921412 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 01:13:45.921422 | orchestrator | Sunday 05 April 2026 01:07:44 +0000 (0:00:01.573) 0:04:49.458 **********
2026-04-05 01:13:45.921433 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.921444 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.921454 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.921465 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.921493 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.921504 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.921515 | orchestrator |
2026-04-05 01:13:45.921526 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-05 01:13:45.921536 | orchestrator | Sunday 05 April 2026 01:07:46 +0000 (0:00:01.884) 0:04:51.342 **********
2026-04-05 01:13:45.921548 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.921558 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.921569 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.921580 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:13:45.921590 | orchestrator |
2026-04-05 01:13:45.921601 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 01:13:45.921618 | orchestrator | Sunday 05 April 2026 01:07:48 +0000 (0:00:01.613) 0:04:52.956 **********
2026-04-05 01:13:45.921629 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-05 01:13:45.921640 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-05 01:13:45.921651 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-05 01:13:45.921662 | orchestrator |
2026-04-05 01:13:45.921673 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 01:13:45.921683 | orchestrator | Sunday 05 April 2026 01:07:50 +0000 (0:00:01.678) 0:04:54.634 **********
2026-04-05 01:13:45.921694 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-04-05 01:13:45.921705 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-04-05 01:13:45.921715 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-04-05 01:13:45.921726 | orchestrator |
2026-04-05 01:13:45.921737 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 01:13:45.921748 | orchestrator | Sunday 05 April 2026 01:07:51 +0000 (0:00:01.255) 0:04:55.889 **********
2026-04-05 01:13:45.921759 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-05 01:13:45.921770 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.921780 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-05 01:13:45.921791 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.921802 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-05 01:13:45.921812 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.921823 | orchestrator |
2026-04-05 01:13:45.921834 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-05 01:13:45.921845 | orchestrator | Sunday 05 April 2026 01:07:52 +0000 (0:00:00.835) 0:04:56.725 **********
2026-04-05 01:13:45.921855 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 01:13:45.921866 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 01:13:45.921877 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.921888 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 01:13:45.921899 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 01:13:45.921910 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.921920 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 01:13:45.921941 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 01:13:45.921952 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.921963 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 01:13:45.921974 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 01:13:45.921984 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 01:13:45.921995 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 01:13:45.922006 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 01:13:45.922453 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 01:13:45.922500 | orchestrator |
2026-04-05 01:13:45.922511 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-05 01:13:45.922522 | orchestrator | Sunday 05 April 2026 01:07:53 +0000 (0:00:01.789) 0:04:58.514 **********
2026-04-05 01:13:45.922533 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.922544 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.922554 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:13:45.922565 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.922576 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:13:45.922587 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:13:45.922598 | orchestrator |
2026-04-05 01:13:45.922616 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-05 01:13:45.922627 | orchestrator | Sunday 05 April 2026 01:07:55 +0000 (0:00:01.645) 0:05:00.160 **********
2026-04-05 01:13:45.922637 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.922648 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.922659 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.922670 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:13:45.922680 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:13:45.922691 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:13:45.922701 | orchestrator |
2026-04-05 01:13:45.922712 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-05 01:13:45.922723 | orchestrator | Sunday 05 April 2026 01:07:57 +0000 (0:00:01.876) 0:05:02.036 **********
2026-04-05 01:13:45.922735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.922763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.922786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.922798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.922816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.922828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.922841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.922859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.922871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.922890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.922907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.922919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.922975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.923008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923039 | orchestrator |
2026-04-05 01:13:45.923050 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 01:13:45.923061 | orchestrator | Sunday 05 April 2026 01:08:00 +0000 (0:00:02.589) 0:05:04.625 **********
2026-04-05 01:13:45.923072 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 01:13:45.923084 | orchestrator |
2026-04-05 01:13:45.923097 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-05 01:13:45.923110 | orchestrator | Sunday 05 April 2026 01:08:01 +0000 (0:00:01.231) 0:05:05.857 **********
2026-04-05 01:13:45.923123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.923214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.923239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.923261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.923276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.923359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.923375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.923395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.923408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.923428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923539 | orchestrator |
2026-04-05 01:13:45.923550 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-04-05 01:13:45.923561 | orchestrator | Sunday 05 April 2026 01:08:05 +0000 (0:00:04.040) 0:05:09.897 **********
2026-04-05 01:13:45.923586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.923598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.923610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.923621 | orchestrator | skipping: [testbed-node-3]
2026-04-05
01:13:45.923638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:13:45.923651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.923668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.923686 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.923698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.923709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.923720 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.923732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:13:45.923748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.923761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.923779 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.923796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.923808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.923819 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.923830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.923842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.923853 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.923879 | orchestrator | 2026-04-05 01:13:45.923893 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-05 01:13:45.923905 | orchestrator | Sunday 05 April 2026 01:08:07 +0000 (0:00:01.811) 0:05:11.709 ********** 2026-04-05 01:13:45.923923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 
01:13:45.923936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.923962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.923975 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.923988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:13:45.924000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.924013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.924025 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.924042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:13:45.924063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.924082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.924107 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.924121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.924134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.924146 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.924159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.924204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.924224 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.924237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.924256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.924269 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.924281 | orchestrator | 2026-04-05 01:13:45.924293 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 01:13:45.924305 | orchestrator | Sunday 05 April 2026 01:08:09 +0000 (0:00:02.146) 0:05:13.855 ********** 2026-04-05 01:13:45.924317 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.924329 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.924341 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.924352 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 01:13:45.924364 | orchestrator | 2026-04-05 01:13:45.924376 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-05 01:13:45.924387 | orchestrator | Sunday 05 April 2026 01:08:10 +0000 (0:00:00.865) 0:05:14.720 ********** 2026-04-05 01:13:45.924399 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 01:13:45.924411 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 01:13:45.924423 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 01:13:45.924434 | orchestrator | 2026-04-05 01:13:45.924446 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-05 01:13:45.924458 | orchestrator | Sunday 05 April 2026 01:08:11 +0000 (0:00:00.895) 0:05:15.616 ********** 2026-04-05 01:13:45.924488 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 01:13:45.924500 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 01:13:45.924511 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 
01:13:45.924523 | orchestrator | 2026-04-05 01:13:45.924534 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-05 01:13:45.924546 | orchestrator | Sunday 05 April 2026 01:08:12 +0000 (0:00:00.996) 0:05:16.612 ********** 2026-04-05 01:13:45.924558 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:13:45.924569 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:13:45.924581 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:13:45.924592 | orchestrator | 2026-04-05 01:13:45.924604 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-05 01:13:45.924615 | orchestrator | Sunday 05 April 2026 01:08:12 +0000 (0:00:00.458) 0:05:17.071 ********** 2026-04-05 01:13:45.924633 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:13:45.924645 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:13:45.924656 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:13:45.924668 | orchestrator | 2026-04-05 01:13:45.924680 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-05 01:13:45.924691 | orchestrator | Sunday 05 April 2026 01:08:12 +0000 (0:00:00.475) 0:05:17.546 ********** 2026-04-05 01:13:45.924703 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-05 01:13:45.924715 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-05 01:13:45.924726 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-05 01:13:45.924738 | orchestrator | 2026-04-05 01:13:45.924749 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-05 01:13:45.924761 | orchestrator | Sunday 05 April 2026 01:08:14 +0000 (0:00:01.237) 0:05:18.783 ********** 2026-04-05 01:13:45.924772 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-05 01:13:45.924784 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-05 
01:13:45.924795 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-05 01:13:45.924807 | orchestrator | 2026-04-05 01:13:45.924819 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-05 01:13:45.924830 | orchestrator | Sunday 05 April 2026 01:08:15 +0000 (0:00:01.275) 0:05:20.059 ********** 2026-04-05 01:13:45.924846 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-05 01:13:45.924858 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-05 01:13:45.924869 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-05 01:13:45.924881 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-05 01:13:45.924893 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-05 01:13:45.924905 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-05 01:13:45.924916 | orchestrator | 2026-04-05 01:13:45.924928 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-05 01:13:45.924940 | orchestrator | Sunday 05 April 2026 01:08:19 +0000 (0:00:04.458) 0:05:24.518 ********** 2026-04-05 01:13:45.924951 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.924963 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.924974 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.924985 | orchestrator | 2026-04-05 01:13:45.924997 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-05 01:13:45.925009 | orchestrator | Sunday 05 April 2026 01:08:20 +0000 (0:00:00.300) 0:05:24.819 ********** 2026-04-05 01:13:45.925020 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.925032 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.925044 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.925055 | orchestrator | 2026-04-05 01:13:45.925067 | 
orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-05 01:13:45.925078 | orchestrator | Sunday 05 April 2026 01:08:20 +0000 (0:00:00.301) 0:05:25.120 ********** 2026-04-05 01:13:45.925090 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:13:45.925101 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:13:45.925113 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:13:45.925124 | orchestrator | 2026-04-05 01:13:45.925136 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-05 01:13:45.925147 | orchestrator | Sunday 05 April 2026 01:08:22 +0000 (0:00:01.536) 0:05:26.657 ********** 2026-04-05 01:13:45.925165 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-05 01:13:45.925177 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-05 01:13:45.925189 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-05 01:13:45.925208 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-05 01:13:45.925220 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-05 01:13:45.925232 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-05 01:13:45.925243 | orchestrator | 2026-04-05 01:13:45.925255 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-05 01:13:45.925267 | orchestrator | Sunday 05 
April 2026 01:08:25 +0000 (0:00:03.228) 0:05:29.885 **********
2026-04-05 01:13:45.925278 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 01:13:45.925290 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 01:13:45.925301 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 01:13:45.925313 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 01:13:45.925324 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:13:45.925336 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 01:13:45.925347 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:13:45.925359 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 01:13:45.925370 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:13:45.925382 | orchestrator |
2026-04-05 01:13:45.925393 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-04-05 01:13:45.925405 | orchestrator | Sunday 05 April 2026 01:08:28 +0000 (0:00:03.242) 0:05:33.127 **********
2026-04-05 01:13:45.925416 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.925428 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.925439 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.925451 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 01:13:45.925462 | orchestrator |
2026-04-05 01:13:45.925499 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-04-05 01:13:45.925511 | orchestrator | Sunday 05 April 2026 01:08:31 +0000 (0:00:03.064) 0:05:36.192 **********
2026-04-05 01:13:45.925522 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 01:13:45.925534 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 01:13:45.925545 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 01:13:45.925557 | orchestrator |
2026-04-05 01:13:45.925568 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-04-05 01:13:45.925580 | orchestrator | Sunday 05 April 2026 01:08:33 +0000 (0:00:01.468) 0:05:37.661 **********
2026-04-05 01:13:45.925591 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.925602 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.925614 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.925625 | orchestrator |
2026-04-05 01:13:45.925636 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-05 01:13:45.925647 | orchestrator | Sunday 05 April 2026 01:08:33 +0000 (0:00:00.135) 0:05:37.962 **********
2026-04-05 01:13:45.925659 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.925670 | orchestrator |
2026-04-05 01:13:45.925681 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-05 01:13:45.925702 | orchestrator | Sunday 05 April 2026 01:08:33 +0000 (0:00:00.827) 0:05:38.098 **********
2026-04-05 01:13:45.925714 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.925725 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.925736 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.925748 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.925759 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.925770 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.925781 | orchestrator |
2026-04-05 01:13:45.925792 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-05 01:13:45.925810 | orchestrator | Sunday 05 April 2026 01:08:34 +0000 (0:00:00.790) 0:05:38.926 **********
2026-04-05 01:13:45.925822 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 01:13:45.925833 | orchestrator |
2026-04-05 01:13:45.925844 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-05 01:13:45.925855 | orchestrator | Sunday 05 April 2026 01:08:35 +0000 (0:00:00.790) 0:05:39.716 **********
2026-04-05 01:13:45.925866 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.925877 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.925888 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.925899 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.925911 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.925922 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.925933 | orchestrator |
2026-04-05 01:13:45.925945 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-05 01:13:45.925956 | orchestrator | Sunday 05 April 2026 01:08:35 +0000 (0:00:00.725) 0:05:40.442 **********
2026-04-05 01:13:45.925976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.925989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.926002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.926014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.926086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.926099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.926120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.926133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.926145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.926174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926277 | orchestrator |
2026-04-05 01:13:45.926289 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-04-05 01:13:45.926301 | orchestrator | Sunday 05 April 2026 01:08:40 +0000 (0:00:04.916) 0:05:45.359 **********
2026-04-05 01:13:45.926313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.926338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.926352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.926370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.926383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 01:13:45.926395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 01:13:45.926415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.926433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.926465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 01:13:45.926512 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 01:13:45.926575 | orchestrator |
2026-04-05 01:13:45.926587 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-05 01:13:45.926599 | orchestrator | Sunday 05 April 2026 01:08:51 +0000 (0:00:10.395) 0:05:55.754 **********
2026-04-05 01:13:45.926610 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.926622 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.926634 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.926645 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.926661 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.926673 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.926684 | orchestrator |
2026-04-05 01:13:45.926694 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-05 01:13:45.926706 | orchestrator | Sunday 05 April 2026 01:08:53 +0000 (0:00:02.800) 0:05:58.555 **********
2026-04-05 01:13:45.926717 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 01:13:45.926727 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 01:13:45.926738 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 01:13:45.926749 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 01:13:45.926760 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 01:13:45.926771 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 01:13:45.926782 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 01:13:45.926793 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.926803 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 01:13:45.926821 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.926832 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 01:13:45.926843 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.926853 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 01:13:45.926864 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 01:13:45.926875 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 01:13:45.926886 | orchestrator |
2026-04-05 01:13:45.926897 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-05 01:13:45.926908 | orchestrator | Sunday 05 April 2026 01:09:00 +0000 (0:00:06.225) 0:06:04.780 **********
2026-04-05 01:13:45.926919 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.926929 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.926940 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.926951 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.926961 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.926972 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.926983 | orchestrator |
2026-04-05 01:13:45.926993 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-05 01:13:45.927004 | orchestrator | Sunday 05 April 2026 01:09:00 +0000 (0:00:00.692) 0:06:05.473 **********
2026-04-05 01:13:45.927016 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 01:13:45.927027 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 01:13:45.927037 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 01:13:45.927048 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 01:13:45.927059 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 01:13:45.927075 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927086 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 01:13:45.927097 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927107 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927118 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927129 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.927140 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927150 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.927161 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927172 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.927182 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927193 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927204 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927215 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927247 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927258 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 01:13:45.927269 | orchestrator |
2026-04-05 01:13:45.927280 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-05 01:13:45.927291 | orchestrator | Sunday 05 April 2026 01:09:06 +0000 (0:00:05.775) 0:06:11.249 **********
2026-04-05 01:13:45.927302 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 01:13:45.927313 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 01:13:45.927324 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 01:13:45.927334 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 01:13:45.927345 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 01:13:45.927356 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 01:13:45.927367 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 01:13:45.927377 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 01:13:45.927388 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 01:13:45.927398 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 01:13:45.927409 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 01:13:45.927420 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 01:13:45.927431 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 01:13:45.927441 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.927452 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 01:13:45.927462 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 01:13:45.927493 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.927504 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 01:13:45.927515 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.927525 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 01:13:45.927536 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 01:13:45.927546 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 01:13:45.927557 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 01:13:45.927568 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 01:13:45.927578 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 01:13:45.927589 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 01:13:45.927609 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 01:13:45.927621 | orchestrator |
2026-04-05 01:13:45.927631 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-05 01:13:45.927642 | orchestrator | Sunday 05 April 2026 01:09:15 +0000 (0:00:09.214) 0:06:20.464 **********
2026-04-05 01:13:45.927653 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.927663 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.927681 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.927692 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.927702 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.927713 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.927724 | orchestrator |
2026-04-05 01:13:45.927735 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-05 01:13:45.927746 | orchestrator | Sunday 05 April 2026 01:09:16 +0000 (0:00:00.590) 0:06:21.054 **********
2026-04-05 01:13:45.927757 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:13:45.927767 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:13:45.927778 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:13:45.927788 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.927799 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.927809 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.927819 | orchestrator |
2026-04-05 01:13:45.927830 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-05 01:13:45.927840 | orchestrator | Sunday 05 April 2026 01:09:17 +0000 (0:00:00.827) 0:06:21.881 **********
2026-04-05 01:13:45.927851 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.927861 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.927872 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.927883 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:13:45.927893 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:13:45.927904 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:13:45.927914 | orchestrator |
2026-04-05 01:13:45.927925 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-05 01:13:45.927936 | orchestrator | Sunday 05 April 2026 01:09:19 +0000 (0:00:01.981) 0:06:23.863 **********
2026-04-05 01:13:45.927946 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.927964 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.927975 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.927985 | orchestrator | changed: [testbed-node-3]
2026-04-05 01:13:45.927995 | orchestrator | changed: [testbed-node-4]
2026-04-05 01:13:45.928006 | orchestrator | changed: [testbed-node-5]
2026-04-05 01:13:45.928016 | orchestrator |
2026-04-05 01:13:45.928027 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-05 01:13:45.928038 |
orchestrator | Sunday 05 April 2026 01:09:21 +0000 (0:00:02.455) 0:06:26.318 ********** 2026-04-05 01:13:45.928050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:13:45.928062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.928075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.928094 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.928112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:13:45.928124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.928144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.928157 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.928169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.928181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.928201 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.928228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 01:13:45.928241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 01:13:45.928260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.928272 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.928284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.928296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.928308 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 01:13:45.928320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 01:13:45.928339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 01:13:45.928351 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.928363 | orchestrator | 2026-04-05 01:13:45.928375 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-05 01:13:45.928391 | orchestrator | Sunday 05 April 2026 01:09:24 +0000 (0:00:02.728) 0:06:29.046 ********** 2026-04-05 01:13:45.928403 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-05 01:13:45.928415 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-05 01:13:45.928426 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.928436 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  
2026-04-05 01:13:45.928447 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-05 01:13:45.928458 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.928486 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-05 01:13:45.928499 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-05 01:13:45.928510 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.928520 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-05 01:13:45.928531 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-05 01:13:45.928542 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.928552 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-05 01:13:45.928563 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-05 01:13:45.928574 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.928584 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-05 01:13:45.928595 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-05 01:13:45.928605 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.928616 | orchestrator | 2026-04-05 01:13:45.928627 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-05 01:13:45.928638 | orchestrator | Sunday 05 April 2026 01:09:26 +0000 (0:00:02.165) 0:06:31.212 ********** 2026-04-05 01:13:45.928657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}}) 2026-04-05 01:13:45.928795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 01:13:45.928888 | orchestrator | 2026-04-05 01:13:45.928899 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 01:13:45.928911 | orchestrator | Sunday 05 April 2026 01:09:30 +0000 (0:00:04.269) 0:06:35.481 ********** 2026-04-05 01:13:45.928923 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.928934 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.928946 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.928957 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.928967 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.928978 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.928989 | orchestrator | 2026-04-05 01:13:45.929000 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 01:13:45.929011 | orchestrator | Sunday 05 April 2026 01:09:31 +0000 (0:00:00.921) 0:06:36.403 ********** 2026-04-05 01:13:45.929022 | orchestrator | 2026-04-05 01:13:45.929033 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 01:13:45.929053 | orchestrator | Sunday 05 April 2026 01:09:31 +0000 (0:00:00.145) 0:06:36.549 ********** 2026-04-05 01:13:45.929072 | orchestrator | 2026-04-05 01:13:45.929092 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 01:13:45.929127 | orchestrator | Sunday 05 April 2026 01:09:32 +0000 (0:00:00.160) 0:06:36.709 ********** 2026-04-05 01:13:45.929153 | orchestrator | 2026-04-05 01:13:45.929174 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-04-05 01:13:45.929193 | orchestrator | Sunday 05 April 2026 01:09:32 +0000 (0:00:00.139) 0:06:36.849 ********** 2026-04-05 01:13:45.929213 | orchestrator | 2026-04-05 01:13:45.929231 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 01:13:45.929251 | orchestrator | Sunday 05 April 2026 01:09:32 +0000 (0:00:00.135) 0:06:36.985 ********** 2026-04-05 01:13:45.929271 | orchestrator | 2026-04-05 01:13:45.929290 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 01:13:45.929311 | orchestrator | Sunday 05 April 2026 01:09:32 +0000 (0:00:00.313) 0:06:37.298 ********** 2026-04-05 01:13:45.929329 | orchestrator | 2026-04-05 01:13:45.929350 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-05 01:13:45.929369 | orchestrator | Sunday 05 April 2026 01:09:32 +0000 (0:00:00.130) 0:06:37.429 ********** 2026-04-05 01:13:45.929387 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:13:45.929421 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:13:45.929441 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:13:45.929460 | orchestrator | 2026-04-05 01:13:45.929512 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-05 01:13:45.929532 | orchestrator | Sunday 05 April 2026 01:09:41 +0000 (0:00:08.523) 0:06:45.952 ********** 2026-04-05 01:13:45.929549 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:13:45.929566 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:13:45.929582 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:13:45.929600 | orchestrator | 2026-04-05 01:13:45.929616 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-05 01:13:45.929633 | orchestrator | Sunday 05 April 2026 01:09:55 +0000 (0:00:13.771) 
0:06:59.724 ********** 2026-04-05 01:13:45.929651 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:13:45.929669 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:13:45.929687 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:13:45.929704 | orchestrator | 2026-04-05 01:13:45.929735 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-05 01:13:45.929754 | orchestrator | Sunday 05 April 2026 01:10:18 +0000 (0:00:23.619) 0:07:23.343 ********** 2026-04-05 01:13:45.929773 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:13:45.929792 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:13:45.929810 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:13:45.929829 | orchestrator | 2026-04-05 01:13:45.929849 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-05 01:13:45.929867 | orchestrator | Sunday 05 April 2026 01:10:48 +0000 (0:00:29.703) 0:07:53.046 ********** 2026-04-05 01:13:45.929884 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-04-05 01:13:45.929897 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:13:45.929907 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
2026-04-05 01:13:45.929918 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:13:45.929929 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:13:45.929939 | orchestrator | 2026-04-05 01:13:45.929952 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-05 01:13:45.929971 | orchestrator | Sunday 05 April 2026 01:10:54 +0000 (0:00:06.253) 0:07:59.300 ********** 2026-04-05 01:13:45.929988 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:13:45.930006 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:13:45.930177 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:13:45.930200 | orchestrator | 2026-04-05 01:13:45.930213 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-05 01:13:45.930224 | orchestrator | Sunday 05 April 2026 01:10:55 +0000 (0:00:00.756) 0:08:00.057 ********** 2026-04-05 01:13:45.930242 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:13:45.930260 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:13:45.930278 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:13:45.930296 | orchestrator | 2026-04-05 01:13:45.930313 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-05 01:13:45.930330 | orchestrator | Sunday 05 April 2026 01:11:16 +0000 (0:00:21.315) 0:08:21.372 ********** 2026-04-05 01:13:45.930349 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.930368 | orchestrator | 2026-04-05 01:13:45.930386 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-05 01:13:45.930404 | orchestrator | Sunday 05 April 2026 01:11:17 +0000 (0:00:00.332) 0:08:21.705 ********** 2026-04-05 01:13:45.930422 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.930439 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.930457 | orchestrator | skipping: [testbed-node-1] 
2026-04-05 01:13:45.930506 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.930525 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.930544 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-04-05 01:13:45.930586 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:13:45.930605 | orchestrator | 2026-04-05 01:13:45.930622 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-05 01:13:45.930640 | orchestrator | Sunday 05 April 2026 01:11:40 +0000 (0:00:23.593) 0:08:45.298 ********** 2026-04-05 01:13:45.930658 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.930676 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.930695 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.930713 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.930732 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.930745 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.930756 | orchestrator | 2026-04-05 01:13:45.930767 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-05 01:13:45.930777 | orchestrator | Sunday 05 April 2026 01:11:49 +0000 (0:00:08.751) 0:08:54.050 ********** 2026-04-05 01:13:45.930788 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.930798 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.930809 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.930828 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.930839 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.930850 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-04-05 01:13:45.930860 | orchestrator | 2026-04-05 01:13:45.930871 | orchestrator | TASK [nova-cell : 
Get a list of existing cells] ******************************** 2026-04-05 01:13:45.930882 | orchestrator | Sunday 05 April 2026 01:11:53 +0000 (0:00:04.039) 0:08:58.090 ********** 2026-04-05 01:13:45.930893 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:13:45.930904 | orchestrator | 2026-04-05 01:13:45.930915 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-05 01:13:45.930926 | orchestrator | Sunday 05 April 2026 01:12:10 +0000 (0:00:17.152) 0:09:15.242 ********** 2026-04-05 01:13:45.930937 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:13:45.930947 | orchestrator | 2026-04-05 01:13:45.930958 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-05 01:13:45.930969 | orchestrator | Sunday 05 April 2026 01:12:12 +0000 (0:00:01.441) 0:09:16.684 ********** 2026-04-05 01:13:45.930980 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.930990 | orchestrator | 2026-04-05 01:13:45.931001 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-05 01:13:45.931012 | orchestrator | Sunday 05 April 2026 01:12:13 +0000 (0:00:01.410) 0:09:18.095 ********** 2026-04-05 01:13:45.931022 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:13:45.931033 | orchestrator | 2026-04-05 01:13:45.931043 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-05 01:13:45.931054 | orchestrator | Sunday 05 April 2026 01:12:28 +0000 (0:00:15.087) 0:09:33.182 ********** 2026-04-05 01:13:45.931065 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:13:45.931076 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:13:45.931087 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:13:45.931097 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:13:45.931108 | orchestrator | ok: 
[testbed-node-1] 2026-04-05 01:13:45.931119 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:13:45.931129 | orchestrator | 2026-04-05 01:13:45.931207 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-05 01:13:45.931229 | orchestrator | 2026-04-05 01:13:45.931248 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-05 01:13:45.931267 | orchestrator | Sunday 05 April 2026 01:12:30 +0000 (0:00:01.923) 0:09:35.106 ********** 2026-04-05 01:13:45.931286 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:13:45.931306 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:13:45.931325 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:13:45.931356 | orchestrator | 2026-04-05 01:13:45.931368 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-05 01:13:45.931378 | orchestrator | 2026-04-05 01:13:45.931389 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-05 01:13:45.931400 | orchestrator | Sunday 05 April 2026 01:12:31 +0000 (0:00:01.185) 0:09:36.292 ********** 2026-04-05 01:13:45.931411 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.931421 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.931432 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.931443 | orchestrator | 2026-04-05 01:13:45.931453 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-05 01:13:45.931464 | orchestrator | 2026-04-05 01:13:45.931505 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-05 01:13:45.931517 | orchestrator | Sunday 05 April 2026 01:12:32 +0000 (0:00:00.531) 0:09:36.824 ********** 2026-04-05 01:13:45.931528 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-05 01:13:45.931539 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-05 01:13:45.931550 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-05 01:13:45.931560 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-05 01:13:45.931571 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-05 01:13:45.931581 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-05 01:13:45.931592 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:13:45.931603 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-05 01:13:45.931614 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-05 01:13:45.931624 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-05 01:13:45.931635 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-05 01:13:45.931645 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-05 01:13:45.931656 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-05 01:13:45.931667 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:13:45.931678 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-05 01:13:45.931689 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-05 01:13:45.931699 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-05 01:13:45.931711 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-05 01:13:45.931721 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-05 01:13:45.931732 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-05 01:13:45.931743 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:13:45.931753 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-05 01:13:45.931764 | 
orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-05 01:13:45.931775 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-05 01:13:45.931785 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-05 01:13:45.931797 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-05 01:13:45.931808 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-05 01:13:45.931826 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.931837 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-05 01:13:45.931848 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-05 01:13:45.931859 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-05 01:13:45.931869 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-05 01:13:45.931880 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-05 01:13:45.931891 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-05 01:13:45.931909 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.931920 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-05 01:13:45.931931 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-05 01:13:45.931941 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-05 01:13:45.931952 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-05 01:13:45.931963 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-05 01:13:45.931973 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-05 01:13:45.931984 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.931994 | orchestrator | 2026-04-05 01:13:45.932005 | orchestrator | PLAY [Reload global Nova API services] 
***************************************** 2026-04-05 01:13:45.932016 | orchestrator | 2026-04-05 01:13:45.932026 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-05 01:13:45.932037 | orchestrator | Sunday 05 April 2026 01:12:33 +0000 (0:00:01.336) 0:09:38.160 ********** 2026-04-05 01:13:45.932048 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-05 01:13:45.932059 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-05 01:13:45.932069 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.932087 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-05 01:13:45.932105 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-05 01:13:45.932178 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.932199 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-05 01:13:45.932215 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-05 01:13:45.932230 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.932246 | orchestrator | 2026-04-05 01:13:45.932263 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-05 01:13:45.932280 | orchestrator | 2026-04-05 01:13:45.932296 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-05 01:13:45.932311 | orchestrator | Sunday 05 April 2026 01:12:34 +0000 (0:00:00.786) 0:09:38.946 ********** 2026-04-05 01:13:45.932328 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.932344 | orchestrator | 2026-04-05 01:13:45.932362 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-05 01:13:45.932380 | orchestrator | 2026-04-05 01:13:45.932398 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-05 01:13:45.932416 | 
orchestrator | Sunday 05 April 2026 01:12:35 +0000 (0:00:00.774) 0:09:39.721 ********** 2026-04-05 01:13:45.932435 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.932453 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.932497 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.932519 | orchestrator | 2026-04-05 01:13:45.932537 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:13:45.932555 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:13:45.932575 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-05 01:13:45.932594 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-05 01:13:45.932612 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-05 01:13:45.932631 | orchestrator | testbed-node-3 : ok=41  changed=28  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-05 01:13:45.932649 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-05 01:13:45.932683 | orchestrator | testbed-node-5 : ok=45  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-05 01:13:45.932701 | orchestrator | 2026-04-05 01:13:45.932719 | orchestrator | 2026-04-05 01:13:45.932737 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:13:45.932756 | orchestrator | Sunday 05 April 2026 01:12:35 +0000 (0:00:00.460) 0:09:40.182 ********** 2026-04-05 01:13:45.932774 | orchestrator | =============================================================================== 2026-04-05 01:13:45.932792 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.16s 
2026-04-05 01:13:45.932812 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.70s 2026-04-05 01:13:45.932831 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.67s 2026-04-05 01:13:45.932851 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.71s 2026-04-05 01:13:45.932871 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.62s 2026-04-05 01:13:45.932899 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.59s 2026-04-05 01:13:45.932917 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 22.06s 2026-04-05 01:13:45.932934 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.32s 2026-04-05 01:13:45.932954 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 17.15s 2026-04-05 01:13:45.932973 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.01s 2026-04-05 01:13:45.932991 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.58s 2026-04-05 01:13:45.933012 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 15.09s 2026-04-05 01:13:45.933028 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.44s 2026-04-05 01:13:45.933044 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.35s 2026-04-05 01:13:45.933061 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.77s 2026-04-05 01:13:45.933078 | orchestrator | nova-cell : Copying over nova.conf ------------------------------------- 10.40s 2026-04-05 01:13:45.933096 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.36s 2026-04-05 
01:13:45.933114 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.21s 2026-04-05 01:13:45.933133 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.75s 2026-04-05 01:13:45.933150 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 8.52s 2026-04-05 01:13:45.933168 | orchestrator | 2026-04-05 01:13:45.933186 | orchestrator | 2026-04-05 01:13:45.933203 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:13:45.933220 | orchestrator | 2026-04-05 01:13:45.933237 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:13:45.933329 | orchestrator | Sunday 05 April 2026 01:10:23 +0000 (0:00:00.330) 0:00:00.331 ********** 2026-04-05 01:13:45.933353 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:13:45.933372 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:13:45.933390 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:13:45.933407 | orchestrator | 2026-04-05 01:13:45.933424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:13:45.933442 | orchestrator | Sunday 05 April 2026 01:10:23 +0000 (0:00:00.543) 0:00:00.874 ********** 2026-04-05 01:13:45.933459 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-05 01:13:45.933506 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-05 01:13:45.933525 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-05 01:13:45.933543 | orchestrator | 2026-04-05 01:13:45.933562 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-05 01:13:45.933595 | orchestrator | 2026-04-05 01:13:45.933612 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-05 01:13:45.933630 | 
orchestrator | Sunday 05 April 2026 01:10:24 +0000 (0:00:00.553) 0:00:01.427 ********** 2026-04-05 01:13:45.933648 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:13:45.933666 | orchestrator | 2026-04-05 01:13:45.933682 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-05 01:13:45.933698 | orchestrator | Sunday 05 April 2026 01:10:25 +0000 (0:00:00.714) 0:00:02.142 ********** 2026-04-05 01:13:45.933715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 01:13:45.933735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 01:13:45.933763 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 01:13:45.933781 | orchestrator | 2026-04-05 01:13:45.933800 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-05 01:13:45.933820 | orchestrator | Sunday 05 April 2026 01:10:26 +0000 (0:00:01.220) 0:00:03.363 ********** 2026-04-05 01:13:45.933839 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-05 01:13:45.933856 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-05 01:13:45.933873 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 01:13:45.933890 | orchestrator | 2026-04-05 01:13:45.933907 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-05 01:13:45.933926 | orchestrator | Sunday 05 April 2026 01:10:27 +0000 (0:00:00.932) 0:00:04.295 ********** 2026-04-05 01:13:45.933943 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:13:45.933960 | orchestrator | 2026-04-05 01:13:45.933976 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-05 01:13:45.933992 | orchestrator | Sunday 05 April 2026 01:10:27 +0000 (0:00:00.598) 0:00:04.893 ********** 2026-04-05 01:13:45.934157 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 01:13:45.934209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 01:13:45.934228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 01:13:45.934240 | orchestrator | 2026-04-05 01:13:45.934251 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-05 01:13:45.934262 | orchestrator | Sunday 05 April 2026 01:10:29 +0000 (0:00:02.005) 0:00:06.899 ********** 2026-04-05 01:13:45.934273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 01:13:45.934293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 01:13:45.934305 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.934316 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:13:45.934328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 01:13:45.934358 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:13:45.934376 | orchestrator | 2026-04-05 01:13:45.934395 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-05 01:13:45.934503 | orchestrator | Sunday 05 April 2026 01:10:30 +0000 (0:00:00.465) 0:00:07.364 ********** 2026-04-05 01:13:45 | INFO  | Task 7e9f5de7-e495-4e99-84cd-3e1d33040039 is in state SUCCESS 2026-04-05 01:13:45.934552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 01:13:45.934566 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.934577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934589 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.934599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934611 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.934621 | orchestrator |
2026-04-05 01:13:45.934633 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-04-05 01:13:45.934643 | orchestrator | Sunday 05 April 2026 01:10:31 +0000 (0:00:00.594) 0:00:07.958 **********
2026-04-05 01:13:45.934662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934770 | orchestrator |
2026-04-05 01:13:45.934788 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-05 01:13:45.934807 | orchestrator | Sunday 05 April 2026 01:10:32 +0000 (0:00:01.502) 0:00:09.461 **********
2026-04-05 01:13:45.934825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.934882 | orchestrator |
2026-04-05 01:13:45.934900 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-05 01:13:45.934928 | orchestrator | Sunday 05 April 2026 01:10:33 +0000 (0:00:01.347) 0:00:10.808 **********
2026-04-05 01:13:45.934948 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.934965 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.934982 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.935020 | orchestrator |
2026-04-05 01:13:45.935039 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-05 01:13:45.935057 | orchestrator | Sunday 05 April 2026 01:10:34 +0000 (0:00:00.396) 0:00:11.204 **********
2026-04-05 01:13:45.935072 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-05 01:13:45.935090 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-05 01:13:45.935107 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-05 01:13:45.935126 | orchestrator |
2026-04-05 01:13:45.935145 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-05 01:13:45.935164 | orchestrator | Sunday 05 April 2026 01:10:35 +0000 (0:00:01.577) 0:00:12.782 **********
2026-04-05 01:13:45.935182 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-05 01:13:45.935201 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-05 01:13:45.935212 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-05 01:13:45.935223 | orchestrator |
2026-04-05 01:13:45.935234 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-05 01:13:45.935244 | orchestrator | Sunday 05 April 2026 01:10:37 +0000 (0:00:01.212) 0:00:13.994 **********
2026-04-05 01:13:45.935255 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 01:13:45.935265 | orchestrator |
2026-04-05 01:13:45.935337 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-05 01:13:45.935361 | orchestrator | Sunday 05 April 2026 01:10:38 +0000 (0:00:01.094) 0:00:15.089 **********
2026-04-05 01:13:45.935379 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-05 01:13:45.935398 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-05 01:13:45.935416 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.935433 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:13:45.935451 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:13:45.935551 | orchestrator |
2026-04-05 01:13:45.935576 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-05 01:13:45.935594 | orchestrator | Sunday 05 April 2026 01:10:38 +0000 (0:00:00.731) 0:00:15.820 **********
2026-04-05 01:13:45.935612 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.935631 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.935650 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.935669 | orchestrator |
2026-04-05 01:13:45.935688 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-05 01:13:45.935707 | orchestrator | Sunday 05 April 2026 01:10:39 +0000 (0:00:00.370) 0:00:16.191 **********
2026-04-05 01:13:45.935727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1317914, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7612717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1317914, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7612717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1317914, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7612717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1317945, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7687492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1317945, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7687492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1317945, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7687492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1317986, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.778419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1317986, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.778419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1317986, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.778419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1317942, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7657492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1317942, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7657492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.935987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1317942, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7657492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1317988, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7802472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1317988, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7802472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1317988, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7802472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1317930, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7632992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1317930, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7632992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1317930, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7632992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1317958, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7730196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1317958, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7730196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1317958, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7730196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1317979, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7767682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1317979, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7767682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1317979, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7767682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1317907, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7589881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1317907, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7589881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1317907, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7589881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1317923, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7632992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1317923, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7632992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1317923, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7632992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1317944, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7667494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1317944, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7667494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1317963, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7746384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1317944, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7667494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1317963, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7746384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1317983, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7777495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1317983, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7777495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1317963, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7746384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1317936, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7657418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1317936, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7657418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1317983, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7777495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1317973, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7767682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1317973, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7767682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1317936, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7657418, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.936957
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1317998, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7807941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.936980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1317998, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7807941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.936992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1317973, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7767682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-04-05 01:13:45.937008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1317962, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7730196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1317962, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7730196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1317998, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7807941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-04-05 01:13:45.937075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1317956, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7717826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1317956, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7717826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1317962, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7730196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1317952, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7708561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1317952, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7708561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1317956, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7717826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1317968, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7758577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1317968, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7758577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1317952, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7708561, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1317949, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7697494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1317949, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7697494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1317982, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7776678, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1317968, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7758577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1317982, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7776678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1317931, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 
'ctime': 1775348303.764395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1317949, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7697494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1317931, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.764395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1318087, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 
'mtime': 1775347349.0, 'ctime': 1775348303.8099923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1317982, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7776678, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1318087, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8099923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 
1318021, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7905674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1318021, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7905674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1317931, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.764395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 30898, 'inode': 1318010, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7827497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1318010, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7827497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1318087, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8099923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1318039, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7927496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1318039, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7927496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1318021, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7905674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1318001, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7814157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1318001, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7814157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1318010, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7827497, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937548 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1318059, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7996395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1318059, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7996395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1318039, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7927496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1318041, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7977498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1318041, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7977498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1318001, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7814157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1318062, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8004653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1318062, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8004653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1318059, 'dev': 126, 'nlink': 1, 
'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7996395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1318084, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8080842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1318084, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8080842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 682774, 'inode': 1318041, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7977498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1318058, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1318058, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1318062, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8004653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1318035, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1318035, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1318084, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8080842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1318016, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7856715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1318016, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7856715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1318058, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1318032, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1318032, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937843 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1318011, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7847483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1318035, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1318011, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7847483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 
01:13:45.937879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1318036, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7918913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1318016, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7856715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1318036, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7918913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1318076, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.80702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1318032, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1318076, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.80702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1318070, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8037498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1318011, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7847483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.937991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1318070, 'dev': 126, 'nlink': 1, 'atime': 
1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8037498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1318004, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7817802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1318036, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7918913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 31128, 'inode': 1318004, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7817802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1318006, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7825334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1318006, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7825334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1318076, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.80702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1318055, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1318055, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1318070, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8037498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1318066, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8008006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1318004, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7817802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938177 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1318066, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8008006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1318006, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.7825334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 01:13:45.938210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1318055, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.79875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.938221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1318066, 'dev': 126, 'nlink': 1, 'atime': 1775347349.0, 'mtime': 1775347349.0, 'ctime': 1775348303.8008006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 01:13:45.938231 | orchestrator |
2026-04-05 01:13:45.938242 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-04-05 01:13:45.938253 | orchestrator | Sunday 05 April 2026 01:11:24 +0000 (0:00:45.383) 0:01:01.574 **********
2026-04-05 01:13:45.938263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.938273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.938297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 01:13:45.938307 | orchestrator |
2026-04-05 01:13:45.938317 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-04-05 01:13:45.938327 | orchestrator | Sunday 05 April 2026 01:11:25 +0000 (0:00:01.286) 0:01:02.861 **********
2026-04-05 01:13:45.938336 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.938346 | orchestrator |
2026-04-05 01:13:45.938356 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-04-05 01:13:45.938365 | orchestrator | Sunday 05 April 2026 01:11:28 +0000 (0:00:02.967) 0:01:05.828 **********
2026-04-05 01:13:45.938375 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.938384 | orchestrator |
2026-04-05 01:13:45.938394 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-05 01:13:45.938403 | orchestrator | Sunday 05 April 2026 01:11:32 +0000 (0:00:00.063) 0:01:08.998 **********
2026-04-05 01:13:45.938413 | orchestrator |
2026-04-05 01:13:45.938422 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-05 01:13:45.938432 | orchestrator | Sunday 05 April 2026 01:11:32 +0000 (0:00:00.064) 0:01:09.062 **********
2026-04-05 01:13:45.938441 | orchestrator |
2026-04-05 01:13:45.938451 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-05 01:13:45.938460 | orchestrator | Sunday 05 April 2026 01:11:32 +0000 (0:00:00.066) 0:01:09.126 **********
2026-04-05 01:13:45.938545 | orchestrator |
2026-04-05 01:13:45.938557 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-05 01:13:45.938567 | orchestrator | Sunday 05 April 2026 01:11:32 +0000 (0:00:00.066) 0:01:09.193 **********
2026-04-05 01:13:45.938576 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.938586 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.938595 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:13:45.938605 | orchestrator |
2026-04-05 01:13:45.938620 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-05 01:13:45.938630 | orchestrator | Sunday 05 April 2026 01:11:34 +0000 (0:00:02.271) 0:01:11.465 **********
2026-04-05 01:13:45.938640 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.938649 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.938659 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-05 01:13:45.938669 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-05 01:13:45.938679 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.938688 | orchestrator |
2026-04-05 01:13:45.938698 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-05 01:13:45.938707 | orchestrator | Sunday 05 April 2026 01:12:01 +0000 (0:00:27.090) 0:01:38.556 **********
2026-04-05 01:13:45.938716 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.938726 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:13:45.938736 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:13:45.938745 | orchestrator |
2026-04-05 01:13:45.938755 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-04-05 01:13:45.938764 | orchestrator | Sunday 05 April 2026 01:12:29 +0000 (0:00:27.490) 0:02:06.046 **********
2026-04-05 01:13:45.938781 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:13:45.938791 | orchestrator |
2026-04-05 01:13:45.938801 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-04-05 01:13:45.938810 | orchestrator | Sunday 05 April 2026 01:12:32 +0000 (0:00:02.913) 0:02:08.959 **********
2026-04-05 01:13:45.938820 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:13:45.938829 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:13:45.938838 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:13:45.938848 | orchestrator |
2026-04-05 01:13:45.938857 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-04-05 01:13:45.938867 | orchestrator | Sunday 05 April 2026 01:12:32 +0000 (0:00:00.302) 0:02:09.262 **********
2026-04-05 01:13:45.938877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})  2026-04-05 01:13:45.938888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-05 01:13:45.938898 | orchestrator | 2026-04-05 01:13:45.938908 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-05 01:13:45.938918 | orchestrator | Sunday 05 April 2026 01:12:35 +0000 (0:00:02.813) 0:02:12.076 ********** 2026-04-05 01:13:45.938927 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:13:45.938937 | orchestrator | 2026-04-05 01:13:45.938946 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:13:45.938956 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:13:45.938967 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:13:45.938982 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 01:13:45.938992 | orchestrator | 2026-04-05 01:13:45.939001 | orchestrator | 2026-04-05 01:13:45.939011 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:13:45.939021 | orchestrator | Sunday 05 April 2026 01:12:35 +0000 (0:00:00.264) 0:02:12.340 ********** 2026-04-05 01:13:45.939030 | orchestrator | =============================================================================== 2026-04-05 01:13:45.939040 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 45.38s 2026-04-05 01:13:45.939049 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 27.49s 2026-04-05 01:13:45.939059 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.09s 2026-04-05 01:13:45.939068 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 3.17s 2026-04-05 01:13:45.939078 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.97s 2026-04-05 01:13:45.939087 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.91s 2026-04-05 01:13:45.939096 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.81s 2026-04-05 01:13:45.939106 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.27s 2026-04-05 01:13:45.939115 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.01s 2026-04-05 01:13:45.939125 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.58s 2026-04-05 01:13:45.939134 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.50s 2026-04-05 01:13:45.939149 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.35s 2026-04-05 01:13:45.939159 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.29s 2026-04-05 01:13:45.939174 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.22s 2026-04-05 01:13:45.939184 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s 2026-04-05 01:13:45.939194 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.09s 2026-04-05 01:13:45.939203 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.93s 2026-04-05 01:13:45.939213 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.73s 2026-04-05 01:13:45.939222 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s 2026-04-05 01:13:45.939232 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s 2026-04-05 01:13:45.939242 | orchestrator | 2026-04-05 01:13:45 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:13:45.939251 | orchestrator | 2026-04-05 01:13:45 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:15:44.823932
| orchestrator | 2026-04-05 01:15:44 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state STARTED 2026-04-05 01:15:44.824032 | orchestrator | 2026-04-05 01:15:44 | INFO  | Wait 1 second(s) until the next check 2026-04-05 01:15:47.872195 | orchestrator | 2026-04-05 01:15:47 | INFO  | Task 69a1ceb6-baaf-4521-a0c5-ab7ea18d3ce9 is in state SUCCESS 2026-04-05 01:15:47.872314 | orchestrator | 2026-04-05 01:15:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:15:47.873581 | orchestrator | 2026-04-05 01:15:47.873678 | orchestrator | 2026-04-05 01:15:47.873691 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:15:47.873703 | orchestrator | 2026-04-05 01:15:47.873714 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:15:47.873725 | orchestrator | Sunday 05 April 2026 01:10:37 +0000 (0:00:00.398) 0:00:00.398 ********** 2026-04-05 01:15:47.873736 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.873748 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:15:47.873759 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:15:47.873769 | orchestrator | 2026-04-05 01:15:47.873780 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:15:47.873791 | orchestrator | Sunday 05 April 2026 01:10:38 +0000 (0:00:00.334) 0:00:00.733 ********** 2026-04-05 01:15:47.873802 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-05 01:15:47.873813 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-05 01:15:47.873823 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-05 01:15:47.873834 | orchestrator | 2026-04-05 01:15:47.873845 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-05 01:15:47.873856 | orchestrator | 2026-04-05 01:15:47.873866 | orchestrator 
| TASK [octavia : include_tasks] ************************************************* 2026-04-05 01:15:47.873877 | orchestrator | Sunday 05 April 2026 01:10:38 +0000 (0:00:00.325) 0:00:01.058 ********** 2026-04-05 01:15:47.873888 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:15:47.873899 | orchestrator | 2026-04-05 01:15:47.873909 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-04-05 01:15:47.873920 | orchestrator | Sunday 05 April 2026 01:10:39 +0000 (0:00:00.898) 0:00:01.956 ********** 2026-04-05 01:15:47.873931 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-05 01:15:47.873942 | orchestrator | 2026-04-05 01:15:47.873954 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-05 01:15:47.873964 | orchestrator | Sunday 05 April 2026 01:10:44 +0000 (0:00:04.787) 0:00:06.744 ********** 2026-04-05 01:15:47.873991 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-05 01:15:47.874234 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-05 01:15:47.874261 | orchestrator | 2026-04-05 01:15:47.874282 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-05 01:15:47.874301 | orchestrator | Sunday 05 April 2026 01:10:52 +0000 (0:00:08.226) 0:00:14.971 ********** 2026-04-05 01:15:47.874319 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 01:15:47.874339 | orchestrator | 2026-04-05 01:15:47.874362 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-05 01:15:47.874383 | orchestrator | Sunday 05 April 2026 01:10:56 +0000 (0:00:04.031) 0:00:19.002 ********** 2026-04-05 01:15:47.874472 | orchestrator | 
changed: [testbed-node-0] => (item=octavia -> service) 2026-04-05 01:15:47.874493 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-05 01:15:47.874511 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 01:15:47.874529 | orchestrator | 2026-04-05 01:15:47.874548 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-05 01:15:47.874567 | orchestrator | Sunday 05 April 2026 01:11:06 +0000 (0:00:09.745) 0:00:28.748 ********** 2026-04-05 01:15:47.875722 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 01:15:47.875766 | orchestrator | 2026-04-05 01:15:47.875778 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-05 01:15:47.875788 | orchestrator | Sunday 05 April 2026 01:11:09 +0000 (0:00:03.592) 0:00:32.341 ********** 2026-04-05 01:15:47.875798 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-05 01:15:47.875829 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-05 01:15:47.875839 | orchestrator | 2026-04-05 01:15:47.875848 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-05 01:15:47.875858 | orchestrator | Sunday 05 April 2026 01:11:18 +0000 (0:00:08.983) 0:00:41.324 ********** 2026-04-05 01:15:47.875867 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-05 01:15:47.875877 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-05 01:15:47.875886 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-05 01:15:47.875896 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-05 01:15:47.875905 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-05 01:15:47.875915 | orchestrator | 2026-04-05 01:15:47.875924 | 
orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 01:15:47.875934 | orchestrator | Sunday 05 April 2026 01:11:36 +0000 (0:00:17.251) 0:00:58.575 ********** 2026-04-05 01:15:47.875943 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:15:47.875953 | orchestrator | 2026-04-05 01:15:47.875963 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-04-05 01:15:47.875973 | orchestrator | Sunday 05 April 2026 01:11:36 +0000 (0:00:00.756) 0:00:59.331 ********** 2026-04-05 01:15:47.875982 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.875992 | orchestrator | 2026-04-05 01:15:47.876001 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-05 01:15:47.876010 | orchestrator | Sunday 05 April 2026 01:11:42 +0000 (0:00:05.268) 0:01:04.600 ********** 2026-04-05 01:15:47.876020 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876030 | orchestrator | 2026-04-05 01:15:47.876039 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-05 01:15:47.876089 | orchestrator | Sunday 05 April 2026 01:11:45 +0000 (0:00:03.558) 0:01:08.159 ********** 2026-04-05 01:15:47.876100 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.876110 | orchestrator | 2026-04-05 01:15:47.876120 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-05 01:15:47.876129 | orchestrator | Sunday 05 April 2026 01:11:49 +0000 (0:00:03.818) 0:01:11.978 ********** 2026-04-05 01:15:47.876139 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-05 01:15:47.876182 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-05 01:15:47.876193 | orchestrator | 2026-04-05 01:15:47.876202 | orchestrator | TASK 
[octavia : Add rules for security groups] ********************************* 2026-04-05 01:15:47.876212 | orchestrator | Sunday 05 April 2026 01:12:00 +0000 (0:00:10.917) 0:01:22.895 ********** 2026-04-05 01:15:47.876222 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-05 01:15:47.876231 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-05 01:15:47.876243 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-05 01:15:47.876254 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-05 01:15:47.876263 | orchestrator | 2026-04-05 01:15:47.876273 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-05 01:15:47.876282 | orchestrator | Sunday 05 April 2026 01:12:19 +0000 (0:00:19.364) 0:01:42.260 ********** 2026-04-05 01:15:47.876294 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876305 | orchestrator | 2026-04-05 01:15:47.876317 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-05 01:15:47.876327 | orchestrator | Sunday 05 April 2026 01:12:25 +0000 (0:00:05.286) 0:01:47.546 ********** 2026-04-05 01:15:47.876347 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876358 | orchestrator | 2026-04-05 01:15:47.876376 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-05 01:15:47.876388 | orchestrator | Sunday 05 April 2026 01:12:31 +0000 (0:00:06.559) 0:01:54.106 ********** 2026-04-05 01:15:47.876399 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:15:47.876410 
| orchestrator | 2026-04-05 01:15:47.876421 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-05 01:15:47.876432 | orchestrator | Sunday 05 April 2026 01:12:32 +0000 (0:00:00.571) 0:01:54.677 ********** 2026-04-05 01:15:47.876442 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.876453 | orchestrator | 2026-04-05 01:15:47.876463 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 01:15:47.876474 | orchestrator | Sunday 05 April 2026 01:12:37 +0000 (0:00:05.508) 0:02:00.185 ********** 2026-04-05 01:15:47.876485 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:15:47.876496 | orchestrator | 2026-04-05 01:15:47.876507 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-05 01:15:47.876518 | orchestrator | Sunday 05 April 2026 01:12:38 +0000 (0:00:00.912) 0:02:01.098 ********** 2026-04-05 01:15:47.876530 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.876539 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876549 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.876559 | orchestrator | 2026-04-05 01:15:47.876569 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-05 01:15:47.876578 | orchestrator | Sunday 05 April 2026 01:12:44 +0000 (0:00:06.202) 0:02:07.301 ********** 2026-04-05 01:15:47.876609 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.876619 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.876629 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876638 | orchestrator | 2026-04-05 01:15:47.876648 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-05 01:15:47.876657 | orchestrator | Sunday 05 April 2026 01:12:50 
+0000 (0:00:05.811) 0:02:13.112 ********** 2026-04-05 01:15:47.876667 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876676 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.876686 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.876695 | orchestrator | 2026-04-05 01:15:47.876705 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-05 01:15:47.876714 | orchestrator | Sunday 05 April 2026 01:12:51 +0000 (0:00:00.785) 0:02:13.898 ********** 2026-04-05 01:15:47.876724 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:15:47.876733 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:15:47.876743 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.876752 | orchestrator | 2026-04-05 01:15:47.876762 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-05 01:15:47.876771 | orchestrator | Sunday 05 April 2026 01:12:53 +0000 (0:00:02.045) 0:02:15.943 ********** 2026-04-05 01:15:47.876781 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.876791 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.876800 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876809 | orchestrator | 2026-04-05 01:15:47.876819 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-05 01:15:47.876829 | orchestrator | Sunday 05 April 2026 01:12:54 +0000 (0:00:01.402) 0:02:17.346 ********** 2026-04-05 01:15:47.876838 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876847 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.876857 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.876866 | orchestrator | 2026-04-05 01:15:47.876876 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-05 01:15:47.876885 | orchestrator | Sunday 05 April 2026 01:12:56 +0000 (0:00:01.226) 
0:02:18.573 ********** 2026-04-05 01:15:47.876895 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.876913 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.876922 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.876932 | orchestrator | 2026-04-05 01:15:47.876969 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-05 01:15:47.876981 | orchestrator | Sunday 05 April 2026 01:12:58 +0000 (0:00:02.314) 0:02:20.888 ********** 2026-04-05 01:15:47.876991 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.877000 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.877010 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.877019 | orchestrator | 2026-04-05 01:15:47.877029 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-05 01:15:47.877038 | orchestrator | Sunday 05 April 2026 01:13:00 +0000 (0:00:01.615) 0:02:22.503 ********** 2026-04-05 01:15:47.877048 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.877057 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:15:47.877067 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:15:47.877077 | orchestrator | 2026-04-05 01:15:47.877086 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-05 01:15:47.877096 | orchestrator | Sunday 05 April 2026 01:13:00 +0000 (0:00:00.708) 0:02:23.211 ********** 2026-04-05 01:15:47.877105 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:15:47.877115 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:15:47.877124 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.877134 | orchestrator | 2026-04-05 01:15:47.877143 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 01:15:47.877153 | orchestrator | Sunday 05 April 2026 01:13:03 +0000 (0:00:03.082) 0:02:26.293 ********** 2026-04-05 
01:15:47.877162 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:15:47.877172 | orchestrator | 2026-04-05 01:15:47.877181 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-05 01:15:47.877191 | orchestrator | Sunday 05 April 2026 01:13:04 +0000 (0:00:00.885) 0:02:27.179 ********** 2026-04-05 01:15:47.877200 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.877210 | orchestrator | 2026-04-05 01:15:47.877219 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-05 01:15:47.877229 | orchestrator | Sunday 05 April 2026 01:13:09 +0000 (0:00:04.555) 0:02:31.735 ********** 2026-04-05 01:15:47.877238 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.877248 | orchestrator | 2026-04-05 01:15:47.877262 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-05 01:15:47.877272 | orchestrator | Sunday 05 April 2026 01:13:13 +0000 (0:00:03.926) 0:02:35.661 ********** 2026-04-05 01:15:47.877281 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-05 01:15:47.877291 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-05 01:15:47.877300 | orchestrator | 2026-04-05 01:15:47.877309 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-05 01:15:47.877319 | orchestrator | Sunday 05 April 2026 01:13:20 +0000 (0:00:07.759) 0:02:43.420 ********** 2026-04-05 01:15:47.877328 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.877338 | orchestrator | 2026-04-05 01:15:47.877347 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-05 01:15:47.877357 | orchestrator | Sunday 05 April 2026 01:13:24 +0000 (0:00:03.905) 0:02:47.326 ********** 2026-04-05 01:15:47.877366 | 
orchestrator | ok: [testbed-node-0] 2026-04-05 01:15:47.877376 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:15:47.877385 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:15:47.877395 | orchestrator | 2026-04-05 01:15:47.877404 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-05 01:15:47.877414 | orchestrator | Sunday 05 April 2026 01:13:25 +0000 (0:00:00.295) 0:02:47.622 ********** 2026-04-05 01:15:47.877427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.877471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.877484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.877499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.877510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.877520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.877537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.877683 | orchestrator | 2026-04-05 01:15:47.877693 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-05 01:15:47.877703 | orchestrator | Sunday 05 April 2026 01:13:28 +0000 (0:00:02.984) 0:02:50.607 ********** 2026-04-05 01:15:47.877713 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:15:47.877722 | orchestrator | 2026-04-05 01:15:47.877755 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-05 01:15:47.877766 | orchestrator | Sunday 05 April 2026 01:13:28 +0000 (0:00:00.143) 0:02:50.751 ********** 2026-04-05 01:15:47.877776 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:15:47.877785 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:15:47.877795 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:15:47.877804 | orchestrator | 2026-04-05 01:15:47.877814 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-05 01:15:47.877823 | orchestrator | Sunday 05 April 2026 01:13:28 +0000 (0:00:00.288) 0:02:51.040 ********** 2026-04-05 01:15:47.877833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.877848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.877866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.877876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.877886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.877896 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:15:47.877930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.877941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.877956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.877973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.877983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.877993 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:15:47.878003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.878086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.878102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.878147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:15:47.878157 | orchestrator | 2026-04-05 01:15:47.878167 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 01:15:47.878176 | orchestrator | Sunday 05 April 2026 01:13:29 +0000 (0:00:00.677) 0:02:51.717 ********** 2026-04-05 01:15:47.878186 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:15:47.878195 | orchestrator | 2026-04-05 01:15:47.878205 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-05 01:15:47.878215 | orchestrator | Sunday 05 April 2026 01:13:30 +0000 (0:00:00.755) 0:02:52.472 ********** 2026-04-05 01:15:47.878224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.878260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.878272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.878293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.878304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.878314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.878324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.878439 | orchestrator | 2026-04-05 01:15:47.878448 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-05 01:15:47.878458 | orchestrator | Sunday 05 April 2026 01:13:35 +0000 (0:00:05.436) 0:02:57.909 ********** 2026-04-05 01:15:47.878468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.878488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.878498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878509 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.878528 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:15:47.878545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.878561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.878571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.878669 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:15:47.878679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.878690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.878706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.878748 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:15:47.878758 | orchestrator | 2026-04-05 01:15:47.878768 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-05 01:15:47.878777 | orchestrator | Sunday 05 April 2026 01:13:36 +0000 (0:00:00.738) 0:02:58.648 ********** 2026-04-05 01:15:47.878787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.878797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.878808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.878850 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:15:47.878864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.878874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.878885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.878927 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:15:47.878937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 01:15:47.878952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 01:15:47.878963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 01:15:47.878983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 01:15:47.879001 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:15:47.879011 | orchestrator | 2026-04-05 01:15:47.879021 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-05 01:15:47.879030 | orchestrator | Sunday 05 April 2026 01:13:37 +0000 (0:00:01.166) 0:02:59.815 ********** 2026-04-05 01:15:47.879046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 
01:15:47.879222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879247 | orchestrator | 2026-04-05 01:15:47.879255 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-05 
01:15:47.879263 | orchestrator | Sunday 05 April 2026 01:13:42 +0000 (0:00:05.543) 0:03:05.359 ********** 2026-04-05 01:15:47.879275 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 01:15:47.879283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 01:15:47.879291 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 01:15:47.879299 | orchestrator | 2026-04-05 01:15:47.879307 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-05 01:15:47.879314 | orchestrator | Sunday 05 April 2026 01:13:44 +0000 (0:00:01.799) 0:03:07.158 ********** 2026-04-05 01:15:47.879323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2026-04-05 01:15:47.879429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879462 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.879478 | orchestrator | 2026-04-05 01:15:47.879487 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-05 01:15:47.879494 | orchestrator | Sunday 05 April 2026 01:14:01 +0000 (0:00:17.253) 0:03:24.411 ********** 2026-04-05 01:15:47.879502 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.879510 | orchestrator | changed: [testbed-node-1] 2026-04-05 01:15:47.879518 | orchestrator | changed: [testbed-node-2] 2026-04-05 01:15:47.879526 | orchestrator | 2026-04-05 01:15:47.879534 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-05 01:15:47.879542 | orchestrator | Sunday 05 April 2026 01:14:03 +0000 (0:00:01.992) 0:03:26.404 ********** 2026-04-05 01:15:47.879549 | orchestrator | changed: [testbed-node-1] => 
(item=client.cert-and-key.pem)
2026-04-05 01:15:47.879557 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879569 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879577 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879600 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879608 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879616 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879624 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879632 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879640 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-05 01:15:47.879647 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-05 01:15:47.879655 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-05 01:15:47.879663 | orchestrator |
2026-04-05 01:15:47.879671 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-04-05 01:15:47.879678 | orchestrator | Sunday 05 April 2026 01:14:09 +0000 (0:00:05.350) 0:03:31.755 **********
2026-04-05 01:15:47.879686 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879694 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879701 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879709 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879717 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879724 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879732 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879740 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879753 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879761 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-05 01:15:47.879772 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-05 01:15:47.879780 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-05 01:15:47.879788 | orchestrator |
2026-04-05 01:15:47.879796 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-04-05 01:15:47.879803 | orchestrator | Sunday 05 April 2026 01:14:14 +0000 (0:00:05.397) 0:03:37.152 **********
2026-04-05 01:15:47.879811 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879819 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879827 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-05 01:15:47.879834 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879842 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879850 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-05 01:15:47.879858 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879865 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879873 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-05 01:15:47.879881 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-05 01:15:47.879888 | orchestrator | changed: [testbed-node-2] =>
(item=server_ca.key.pem) 2026-04-05 01:15:47.879896 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-05 01:15:47.879904 | orchestrator | 2026-04-05 01:15:47.879912 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-05 01:15:47.879919 | orchestrator | Sunday 05 April 2026 01:14:19 +0000 (0:00:05.135) 0:03:42.288 ********** 2026-04-05 01:15:47.879927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 01:15:47.879967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 01:15:47.879992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 01:15:47.880092 | orchestrator | 2026-04-05 01:15:47.880114 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 01:15:47.880128 | orchestrator | Sunday 05 April 2026 01:14:23 +0000 (0:00:04.059) 0:03:46.348 ********** 2026-04-05 01:15:47.880142 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:15:47.880155 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:15:47.880167 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:15:47.880175 | orchestrator | 2026-04-05 01:15:47.880183 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-05 01:15:47.880190 | orchestrator | Sunday 05 April 2026 01:14:24 +0000 (0:00:00.533) 0:03:46.881 ********** 2026-04-05 01:15:47.880198 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.880206 | orchestrator | 2026-04-05 01:15:47.880213 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-05 01:15:47.880221 | orchestrator | Sunday 05 April 2026 01:14:26 +0000 (0:00:02.573) 0:03:49.455 ********** 2026-04-05 01:15:47.880229 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:15:47.880236 | orchestrator | 2026-04-05 01:15:47.880244 | orchestrator | TASK 
[octavia : Creating Octavia database user and setting permissions] ********
2026-04-05 01:15:47.880252 | orchestrator | Sunday 05 April 2026 01:14:29 +0000 (0:00:02.524) 0:03:51.979 **********
2026-04-05 01:15:47.880259 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880267 | orchestrator |
2026-04-05 01:15:47.880275 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-05 01:15:47.880283 | orchestrator | Sunday 05 April 2026 01:14:32 +0000 (0:00:02.734) 0:03:54.713 **********
2026-04-05 01:15:47.880290 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880298 | orchestrator |
2026-04-05 01:15:47.880306 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-05 01:15:47.880313 | orchestrator | Sunday 05 April 2026 01:14:34 +0000 (0:00:02.704) 0:03:57.418 **********
2026-04-05 01:15:47.880321 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880328 | orchestrator |
2026-04-05 01:15:47.880340 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-05 01:15:47.880348 | orchestrator | Sunday 05 April 2026 01:14:58 +0000 (0:00:24.042) 0:04:21.461 **********
2026-04-05 01:15:47.880356 | orchestrator |
2026-04-05 01:15:47.880364 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-05 01:15:47.880371 | orchestrator | Sunday 05 April 2026 01:14:59 +0000 (0:00:00.070) 0:04:21.532 **********
2026-04-05 01:15:47.880379 | orchestrator |
2026-04-05 01:15:47.880387 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-05 01:15:47.880395 | orchestrator | Sunday 05 April 2026 01:14:59 +0000 (0:00:00.068) 0:04:21.600 **********
2026-04-05 01:15:47.880402 | orchestrator |
2026-04-05 01:15:47.880410 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-04-05 01:15:47.880418 | orchestrator | Sunday 05 April 2026 01:14:59 +0000 (0:00:00.071) 0:04:21.671 **********
2026-04-05 01:15:47.880426 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880433 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:15:47.880441 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:15:47.880449 | orchestrator |
2026-04-05 01:15:47.880457 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-04-05 01:15:47.880464 | orchestrator | Sunday 05 April 2026 01:15:09 +0000 (0:00:10.762) 0:04:32.433 **********
2026-04-05 01:15:47.880472 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:15:47.880480 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:15:47.880487 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880495 | orchestrator |
2026-04-05 01:15:47.880503 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-04-05 01:15:47.880511 | orchestrator | Sunday 05 April 2026 01:15:18 +0000 (0:00:08.295) 0:04:40.729 **********
2026-04-05 01:15:47.880518 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880526 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:15:47.880534 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:15:47.880547 | orchestrator |
2026-04-05 01:15:47.880554 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-04-05 01:15:47.880562 | orchestrator | Sunday 05 April 2026 01:15:28 +0000 (0:00:10.477) 0:04:51.207 **********
2026-04-05 01:15:47.880570 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880577 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:15:47.880608 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:15:47.880616 | orchestrator |
2026-04-05 01:15:47.880624 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-04-05 01:15:47.880632 | orchestrator | Sunday 05 April 2026 01:15:39 +0000 (0:00:10.662) 0:05:01.869 **********
2026-04-05 01:15:47.880640 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:15:47.880647 | orchestrator | changed: [testbed-node-1]
2026-04-05 01:15:47.880655 | orchestrator | changed: [testbed-node-2]
2026-04-05 01:15:47.880663 | orchestrator |
2026-04-05 01:15:47.880670 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:15:47.880678 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 01:15:47.880687 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 01:15:47.880695 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 01:15:47.880703 | orchestrator |
2026-04-05 01:15:47.880711 | orchestrator |
2026-04-05 01:15:47.880718 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:15:47.880726 | orchestrator | Sunday 05 April 2026 01:15:45 +0000 (0:00:05.697) 0:05:07.566 **********
2026-04-05 01:15:47.880739 | orchestrator | ===============================================================================
2026-04-05 01:15:47.880747 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.04s
2026-04-05 01:15:47.880754 | orchestrator | octavia : Add rules for security groups -------------------------------- 19.36s
2026-04-05 01:15:47.880764 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.25s
2026-04-05 01:15:47.880777 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.25s
2026-04-05 01:15:47.880790 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.92s
2026-04-05 01:15:47.880802 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.76s
2026-04-05 01:15:47.880814 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.66s
2026-04-05 01:15:47.880827 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.48s
2026-04-05 01:15:47.880839 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.75s
2026-04-05 01:15:47.880850 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.98s
2026-04-05 01:15:47.880863 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.30s
2026-04-05 01:15:47.880876 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 8.23s
2026-04-05 01:15:47.880888 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.76s
2026-04-05 01:15:47.880899 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.56s
2026-04-05 01:15:47.880911 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.20s
2026-04-05 01:15:47.880924 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.81s
2026-04-05 01:15:47.880937 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.70s
2026-04-05 01:15:47.880950 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.54s
2026-04-05 01:15:47.880987 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.51s
2026-04-05 01:15:47.881013 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.44s
2026-04-05 01:15:50.923639 | orchestrator | 2026-04-05 01:15:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05
01:15:53.971998 | orchestrator | 2026-04-05 01:15:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:15:57.024572 | orchestrator | 2026-04-05 01:15:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:00.073858 | orchestrator | 2026-04-05 01:16:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:03.122202 | orchestrator | 2026-04-05 01:16:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:06.162993 | orchestrator | 2026-04-05 01:16:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:09.208819 | orchestrator | 2026-04-05 01:16:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:12.251938 | orchestrator | 2026-04-05 01:16:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:15.294297 | orchestrator | 2026-04-05 01:16:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:18.333976 | orchestrator | 2026-04-05 01:16:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:21.379396 | orchestrator | 2026-04-05 01:16:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:24.425605 | orchestrator | 2026-04-05 01:16:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:27.478097 | orchestrator | 2026-04-05 01:16:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:30.518410 | orchestrator | 2026-04-05 01:16:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:33.561542 | orchestrator | 2026-04-05 01:16:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:36.608328 | orchestrator | 2026-04-05 01:16:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:39.656470 | orchestrator | 2026-04-05 01:16:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:42.700967 | orchestrator | 2026-04-05 
01:16:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:45.745047 | orchestrator | 2026-04-05 01:16:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-05 01:16:48.783326 | orchestrator | 2026-04-05 01:16:49.021497 | orchestrator | 2026-04-05 01:16:49.026231 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Apr 5 01:16:49 UTC 2026 2026-04-05 01:16:49.026323 | orchestrator | 2026-04-05 01:16:49.350739 | orchestrator | ok: Runtime: 0:35:21.006209 2026-04-05 01:16:49.623034 | 2026-04-05 01:16:49.623175 | TASK [Bootstrap services] 2026-04-05 01:16:50.396628 | orchestrator | 2026-04-05 01:16:50.396822 | orchestrator | # BOOTSTRAP 2026-04-05 01:16:50.396839 | orchestrator | 2026-04-05 01:16:50.396847 | orchestrator | + set -e 2026-04-05 01:16:50.396855 | orchestrator | + echo 2026-04-05 01:16:50.396864 | orchestrator | + echo '# BOOTSTRAP' 2026-04-05 01:16:50.396876 | orchestrator | + echo 2026-04-05 01:16:50.396906 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-05 01:16:50.407694 | orchestrator | + set -e 2026-04-05 01:16:50.407782 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-05 01:16:56.442714 | orchestrator | 2026-04-05 01:16:56 | INFO  | It takes a moment until task dd4080a4-1ea0-4d29-9001-67a6e05003db (flavor-manager) has been started and output is visible here. 
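The repeated "Wait 1 second(s) until refresh of running tasks" lines above are the osism CLI polling a Celery task until it finishes and its output can be streamed. A minimal sketch of that polling pattern, as a hypothetical helper (not the actual osism implementation; `get_state` and the state names are assumptions):

```python
import time

def wait_for_task(get_state, poll_interval=3.0, timeout=300.0):
    """Poll a task until it leaves a running state.

    Hypothetical sketch of the polling loop visible in the log above;
    get_state is any callable returning the current task state string,
    and the state names ("PENDING", "STARTED") are assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state not in ("PENDING", "STARTED"):
            # Task reached a terminal state (e.g. "SUCCESS" or "FAILURE").
            return state
        time.sleep(poll_interval)
    raise TimeoutError("task did not finish in time")
```

In the log the interval between refresh messages is about three seconds, which is what `poll_interval` models here.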
2026-04-05 01:17:07.193351 | orchestrator | 2026-04-05 01:17:01 | INFO  | Flavor SCS-1L-1 created 2026-04-05 01:17:07.194094 | orchestrator | 2026-04-05 01:17:02 | INFO  | Flavor SCS-1L-1-5 created 2026-04-05 01:17:07.194116 | orchestrator | 2026-04-05 01:17:02 | INFO  | Flavor SCS-1V-2 created 2026-04-05 01:17:07.194125 | orchestrator | 2026-04-05 01:17:03 | INFO  | Flavor SCS-1V-2-5 created 2026-04-05 01:17:07.194133 | orchestrator | 2026-04-05 01:17:03 | INFO  | Flavor SCS-1V-4 created 2026-04-05 01:17:07.194141 | orchestrator | 2026-04-05 01:17:03 | INFO  | Flavor SCS-1V-4-10 created 2026-04-05 01:17:07.194149 | orchestrator | 2026-04-05 01:17:03 | INFO  | Flavor SCS-1V-8 created 2026-04-05 01:17:07.194157 | orchestrator | 2026-04-05 01:17:03 | INFO  | Flavor SCS-1V-8-20 created 2026-04-05 01:17:07.194172 | orchestrator | 2026-04-05 01:17:03 | INFO  | Flavor SCS-2V-4 created 2026-04-05 01:17:07.194179 | orchestrator | 2026-04-05 01:17:04 | INFO  | Flavor SCS-2V-4-10 created 2026-04-05 01:17:07.194187 | orchestrator | 2026-04-05 01:17:04 | INFO  | Flavor SCS-2V-8 created 2026-04-05 01:17:07.194194 | orchestrator | 2026-04-05 01:17:04 | INFO  | Flavor SCS-2V-8-20 created 2026-04-05 01:17:07.194202 | orchestrator | 2026-04-05 01:17:04 | INFO  | Flavor SCS-2V-16 created 2026-04-05 01:17:07.194209 | orchestrator | 2026-04-05 01:17:04 | INFO  | Flavor SCS-2V-16-50 created 2026-04-05 01:17:07.194216 | orchestrator | 2026-04-05 01:17:04 | INFO  | Flavor SCS-4V-8 created 2026-04-05 01:17:07.194224 | orchestrator | 2026-04-05 01:17:04 | INFO  | Flavor SCS-4V-8-20 created 2026-04-05 01:17:07.194231 | orchestrator | 2026-04-05 01:17:05 | INFO  | Flavor SCS-4V-16 created 2026-04-05 01:17:07.194239 | orchestrator | 2026-04-05 01:17:05 | INFO  | Flavor SCS-4V-16-50 created 2026-04-05 01:17:07.194247 | orchestrator | 2026-04-05 01:17:05 | INFO  | Flavor SCS-4V-32 created 2026-04-05 01:17:07.194254 | orchestrator | 2026-04-05 01:17:05 | INFO  | Flavor SCS-4V-32-100 created 
2026-04-05 01:17:07.194262 | orchestrator | 2026-04-05 01:17:05 | INFO  | Flavor SCS-8V-16 created 2026-04-05 01:17:07.194269 | orchestrator | 2026-04-05 01:17:05 | INFO  | Flavor SCS-8V-16-50 created 2026-04-05 01:17:07.194276 | orchestrator | 2026-04-05 01:17:05 | INFO  | Flavor SCS-8V-32 created 2026-04-05 01:17:07.194284 | orchestrator | 2026-04-05 01:17:06 | INFO  | Flavor SCS-8V-32-100 created 2026-04-05 01:17:07.194291 | orchestrator | 2026-04-05 01:17:06 | INFO  | Flavor SCS-16V-32 created 2026-04-05 01:17:07.194299 | orchestrator | 2026-04-05 01:17:06 | INFO  | Flavor SCS-16V-32-100 created 2026-04-05 01:17:07.194306 | orchestrator | 2026-04-05 01:17:06 | INFO  | Flavor SCS-2V-4-20s created 2026-04-05 01:17:07.194314 | orchestrator | 2026-04-05 01:17:06 | INFO  | Flavor SCS-4V-8-50s created 2026-04-05 01:17:07.194321 | orchestrator | 2026-04-05 01:17:06 | INFO  | Flavor SCS-4V-16-100s created 2026-04-05 01:17:07.194329 | orchestrator | 2026-04-05 01:17:06 | INFO  | Flavor SCS-8V-32-100s created 2026-04-05 01:17:08.566060 | orchestrator | 2026-04-05 01:17:08 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-05 01:17:18.699158 | orchestrator | 2026-04-05 01:17:18 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-05 01:17:18.805102 | orchestrator | 2026-04-05 01:17:18 | INFO  | Task 991126dd-804e-4036-9cc6-db2115fdc733 (bootstrap-basic) was prepared for execution. 2026-04-05 01:17:18.805224 | orchestrator | 2026-04-05 01:17:18 | INFO  | It takes a moment until task 991126dd-804e-4036-9cc6-db2115fdc733 (bootstrap-basic) has been started and output is visible here. 
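The flavor names created by flavor-manager above follow the SCS naming scheme: `SCS-<n><V|L>-<ram>[-<disk>][s]`, e.g. `SCS-4V-16-50` for 4 vCPUs, 16 GiB RAM and a 50 GB root disk (no disk suffix means a diskless flavor). A small hypothetical parser for the pattern visible in the log, useful for sanity-checking such names:

```python
import re

def parse_scs_flavor(name):
    """Parse an SCS flavor name like 'SCS-4V-16-50' into (vcpus, ram_gib, disk_gb).

    Hypothetical helper; assumes the SCS-<n><V|L>-<ram>[-<disk>][s] convention
    seen in the log (n vCPUs, RAM in GiB, optional root disk in GB, 0 when
    diskless). The trailing 's' and the V/L vCPU class are accepted but not
    interpreted further here.
    """
    m = re.match(r"SCS-(\d+)[VL]-(\d+)(?:-(\d+))?s?$", name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return int(m.group(1)), int(m.group(2)), int(m.group(3) or 0)
```

For example, `SCS-1L-1` from the log parses to one vCPU, 1 GiB RAM and no disk, while `SCS-2V-4-20s` parses to 2/4/20.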
2026-04-05 01:18:08.296222 | orchestrator | 2026-04-05 01:18:08.296362 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-05 01:18:08.296384 | orchestrator | 2026-04-05 01:18:08.296397 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 01:18:08.296453 | orchestrator | Sunday 05 April 2026 01:17:22 +0000 (0:00:00.117) 0:00:00.117 ********** 2026-04-05 01:18:08.296468 | orchestrator | ok: [localhost] 2026-04-05 01:18:08.296480 | orchestrator | 2026-04-05 01:18:08.296492 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-05 01:18:08.296504 | orchestrator | Sunday 05 April 2026 01:17:24 +0000 (0:00:02.124) 0:00:02.241 ********** 2026-04-05 01:18:08.296519 | orchestrator | ok: [localhost] 2026-04-05 01:18:08.296531 | orchestrator | 2026-04-05 01:18:08.296543 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-05 01:18:08.296556 | orchestrator | Sunday 05 April 2026 01:17:34 +0000 (0:00:09.849) 0:00:12.091 ********** 2026-04-05 01:18:08.296568 | orchestrator | changed: [localhost] 2026-04-05 01:18:08.296580 | orchestrator | 2026-04-05 01:18:08.296590 | orchestrator | TASK [Create public network] *************************************************** 2026-04-05 01:18:08.296602 | orchestrator | Sunday 05 April 2026 01:17:43 +0000 (0:00:08.803) 0:00:20.894 ********** 2026-04-05 01:18:08.296614 | orchestrator | changed: [localhost] 2026-04-05 01:18:08.296625 | orchestrator | 2026-04-05 01:18:08.296643 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-05 01:18:08.296656 | orchestrator | Sunday 05 April 2026 01:17:48 +0000 (0:00:05.544) 0:00:26.438 ********** 2026-04-05 01:18:08.296667 | orchestrator | changed: [localhost] 2026-04-05 01:18:08.296678 | orchestrator | 2026-04-05 01:18:08.296689 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-05 01:18:08.296699 | orchestrator | Sunday 05 April 2026 01:17:55 +0000 (0:00:06.717) 0:00:33.155 ********** 2026-04-05 01:18:08.296733 | orchestrator | changed: [localhost] 2026-04-05 01:18:08.296745 | orchestrator | 2026-04-05 01:18:08.296757 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-05 01:18:08.296769 | orchestrator | Sunday 05 April 2026 01:18:00 +0000 (0:00:04.761) 0:00:37.916 ********** 2026-04-05 01:18:08.296780 | orchestrator | changed: [localhost] 2026-04-05 01:18:08.296792 | orchestrator | 2026-04-05 01:18:08.296804 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-05 01:18:08.296830 | orchestrator | Sunday 05 April 2026 01:18:04 +0000 (0:00:04.097) 0:00:42.014 ********** 2026-04-05 01:18:08.296847 | orchestrator | ok: [localhost] 2026-04-05 01:18:08.296860 | orchestrator | 2026-04-05 01:18:08.296872 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:18:08.296884 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 01:18:08.296898 | orchestrator | 2026-04-05 01:18:08.296909 | orchestrator | 2026-04-05 01:18:08.296922 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:18:08.296934 | orchestrator | Sunday 05 April 2026 01:18:08 +0000 (0:00:03.836) 0:00:45.850 ********** 2026-04-05 01:18:08.296945 | orchestrator | =============================================================================== 2026-04-05 01:18:08.296957 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.85s 2026-04-05 01:18:08.296998 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.80s 2026-04-05 01:18:08.297013 | 
orchestrator | Set public network to default ------------------------------------------- 6.72s 2026-04-05 01:18:08.297024 | orchestrator | Create public network --------------------------------------------------- 5.54s 2026-04-05 01:18:08.297035 | orchestrator | Create public subnet ---------------------------------------------------- 4.76s 2026-04-05 01:18:08.297047 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.10s 2026-04-05 01:18:08.297059 | orchestrator | Create manager role ----------------------------------------------------- 3.84s 2026-04-05 01:18:08.297073 | orchestrator | Gathering Facts --------------------------------------------------------- 2.12s 2026-04-05 01:18:10.303416 | orchestrator | 2026-04-05 01:18:10 | INFO  | It takes a moment until task 89250df2-5c0b-49a3-a746-0659d43854d2 (image-manager) has been started and output is visible here. 2026-04-05 01:18:54.472188 | orchestrator | 2026-04-05 01:18:13 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-05 01:18:54.472299 | orchestrator | 2026-04-05 01:18:13 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-05 01:18:54.472313 | orchestrator | 2026-04-05 01:18:13 | INFO  | Importing image Cirros 0.6.2 2026-04-05 01:18:54.472350 | orchestrator | 2026-04-05 01:18:13 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-05 01:18:54.472360 | orchestrator | 2026-04-05 01:18:15 | INFO  | Waiting for image to leave queued state... 2026-04-05 01:18:54.472368 | orchestrator | 2026-04-05 01:18:17 | INFO  | Waiting for import to complete... 
2026-04-05 01:18:54.472376 | orchestrator | 2026-04-05 01:18:28 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-05 01:18:54.472384 | orchestrator | 2026-04-05 01:18:28 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-05 01:18:54.472391 | orchestrator | 2026-04-05 01:18:28 | INFO  | Setting internal_version = 0.6.2 2026-04-05 01:18:54.472398 | orchestrator | 2026-04-05 01:18:28 | INFO  | Setting image_original_user = cirros 2026-04-05 01:18:54.472406 | orchestrator | 2026-04-05 01:18:28 | INFO  | Adding tag os:cirros 2026-04-05 01:18:54.472413 | orchestrator | 2026-04-05 01:18:28 | INFO  | Setting property architecture: x86_64 2026-04-05 01:18:54.472420 | orchestrator | 2026-04-05 01:18:29 | INFO  | Setting property hw_disk_bus: scsi 2026-04-05 01:18:54.472426 | orchestrator | 2026-04-05 01:18:29 | INFO  | Setting property hw_rng_model: virtio 2026-04-05 01:18:54.472433 | orchestrator | 2026-04-05 01:18:29 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-05 01:18:54.472440 | orchestrator | 2026-04-05 01:18:30 | INFO  | Setting property hw_watchdog_action: reset 2026-04-05 01:18:54.472447 | orchestrator | 2026-04-05 01:18:30 | INFO  | Setting property hypervisor_type: qemu 2026-04-05 01:18:54.472462 | orchestrator | 2026-04-05 01:18:30 | INFO  | Setting property os_distro: cirros 2026-04-05 01:18:54.472469 | orchestrator | 2026-04-05 01:18:30 | INFO  | Setting property os_purpose: minimal 2026-04-05 01:18:54.472475 | orchestrator | 2026-04-05 01:18:30 | INFO  | Setting property replace_frequency: never 2026-04-05 01:18:54.472482 | orchestrator | 2026-04-05 01:18:31 | INFO  | Setting property uuid_validity: none 2026-04-05 01:18:54.472489 | orchestrator | 2026-04-05 01:18:31 | INFO  | Setting property provided_until: none 2026-04-05 01:18:54.472496 | orchestrator | 2026-04-05 01:18:31 | INFO  | Setting property image_description: Cirros 2026-04-05 01:18:54.472502 | orchestrator | 2026-04-05 01:18:31 | INFO  | 
Setting property image_name: Cirros 2026-04-05 01:18:54.472528 | orchestrator | 2026-04-05 01:18:31 | INFO  | Setting property internal_version: 0.6.2 2026-04-05 01:18:54.472535 | orchestrator | 2026-04-05 01:18:32 | INFO  | Setting property image_original_user: cirros 2026-04-05 01:18:54.472542 | orchestrator | 2026-04-05 01:18:32 | INFO  | Setting property os_version: 0.6.2 2026-04-05 01:18:54.472550 | orchestrator | 2026-04-05 01:18:32 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-05 01:18:54.472557 | orchestrator | 2026-04-05 01:18:32 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-05 01:18:54.472564 | orchestrator | 2026-04-05 01:18:33 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-05 01:18:54.472571 | orchestrator | 2026-04-05 01:18:33 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-05 01:18:54.472581 | orchestrator | 2026-04-05 01:18:33 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-05 01:18:54.472588 | orchestrator | 2026-04-05 01:18:33 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-05 01:18:54.472595 | orchestrator | 2026-04-05 01:18:33 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-05 01:18:54.472602 | orchestrator | 2026-04-05 01:18:33 | INFO  | Importing image Cirros 0.6.3 2026-04-05 01:18:54.472608 | orchestrator | 2026-04-05 01:18:33 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-05 01:18:54.472615 | orchestrator | 2026-04-05 01:18:35 | INFO  | Waiting for image to leave queued state... 2026-04-05 01:18:54.472622 | orchestrator | 2026-04-05 01:18:37 | INFO  | Waiting for import to complete... 
2026-04-05 01:18:54.472642 | orchestrator | 2026-04-05 01:18:47 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-04-05 01:18:54.472650 | orchestrator | 2026-04-05 01:18:48 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-04-05 01:18:54.472656 | orchestrator | 2026-04-05 01:18:48 | INFO  | Setting internal_version = 0.6.3 2026-04-05 01:18:54.472663 | orchestrator | 2026-04-05 01:18:48 | INFO  | Setting image_original_user = cirros 2026-04-05 01:18:54.472670 | orchestrator | 2026-04-05 01:18:48 | INFO  | Adding tag os:cirros 2026-04-05 01:18:54.472676 | orchestrator | 2026-04-05 01:18:48 | INFO  | Setting property architecture: x86_64 2026-04-05 01:18:54.472683 | orchestrator | 2026-04-05 01:18:48 | INFO  | Setting property hw_disk_bus: scsi 2026-04-05 01:18:54.472690 | orchestrator | 2026-04-05 01:18:48 | INFO  | Setting property hw_rng_model: virtio 2026-04-05 01:18:54.472696 | orchestrator | 2026-04-05 01:18:49 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-05 01:18:54.472703 | orchestrator | 2026-04-05 01:18:49 | INFO  | Setting property hw_watchdog_action: reset 2026-04-05 01:18:54.472710 | orchestrator | 2026-04-05 01:18:49 | INFO  | Setting property hypervisor_type: qemu 2026-04-05 01:18:54.472717 | orchestrator | 2026-04-05 01:18:50 | INFO  | Setting property os_distro: cirros 2026-04-05 01:18:54.472725 | orchestrator | 2026-04-05 01:18:50 | INFO  | Setting property os_purpose: minimal 2026-04-05 01:18:54.472733 | orchestrator | 2026-04-05 01:18:50 | INFO  | Setting property replace_frequency: never 2026-04-05 01:18:54.472741 | orchestrator | 2026-04-05 01:18:50 | INFO  | Setting property uuid_validity: none 2026-04-05 01:18:54.472783 | orchestrator | 2026-04-05 01:18:51 | INFO  | Setting property provided_until: none 2026-04-05 01:18:54.472792 | orchestrator | 2026-04-05 01:18:51 | INFO  | Setting property image_description: Cirros 2026-04-05 01:18:54.472806 | orchestrator | 2026-04-05 01:18:51 | INFO  | 
Setting property image_name: Cirros 2026-04-05 01:18:54.472814 | orchestrator | 2026-04-05 01:18:51 | INFO  | Setting property internal_version: 0.6.3 2026-04-05 01:18:54.472821 | orchestrator | 2026-04-05 01:18:52 | INFO  | Setting property image_original_user: cirros 2026-04-05 01:18:54.472829 | orchestrator | 2026-04-05 01:18:52 | INFO  | Setting property os_version: 0.6.3 2026-04-05 01:18:54.472837 | orchestrator | 2026-04-05 01:18:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-05 01:18:54.472845 | orchestrator | 2026-04-05 01:18:53 | INFO  | Setting property image_build_date: 2024-09-26 2026-04-05 01:18:54.472852 | orchestrator | 2026-04-05 01:18:53 | INFO  | Checking status of 'Cirros 0.6.3' 2026-04-05 01:18:54.472860 | orchestrator | 2026-04-05 01:18:53 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-04-05 01:18:54.472868 | orchestrator | 2026-04-05 01:18:53 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-04-05 01:18:54.781319 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-04-05 01:18:56.830345 | orchestrator | 2026-04-05 01:18:56 | INFO  | date: 2026-04-04 2026-04-05 01:18:56.830449 | orchestrator | 2026-04-05 01:18:56 | INFO  | image: octavia-amphora-haproxy-2024.2.20260404.qcow2 2026-04-05 01:18:56.830489 | orchestrator | 2026-04-05 01:18:56 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260404.qcow2 2026-04-05 01:18:56.830653 | orchestrator | 2026-04-05 01:18:56 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260404.qcow2.CHECKSUM 2026-04-05 01:18:57.034845 | orchestrator | 2026-04-05 01:18:57 | INFO  | checksum: a325ef68c8cd1b8dae221b5c125377c7eaecd99eff7b17de50feb7cba34e61c9 2026-04-05 01:18:57.132903 | orchestrator | 
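The amphora bootstrap script above resolves an image URL plus a companion `.CHECKSUM` URL and logs the extracted sha256. A sketch of that checksum handling (hypothetical helpers, not the actual script; the CHECKSUM file layout assumed here is the common `sha256:<digest> *<filename>` / `<digest>  <filename>` style):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the sha256 of a local image file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def parse_checksum_file(text, image_name):
    """Pick the digest for image_name out of a CHECKSUM file.

    Assumes lines of the form 'sha256:<digest> *<filename>' or
    '<digest>  <filename>'; returns None when the image is not listed.
    """
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[-1].lstrip("*") == image_name:
            return parts[0].split(":")[-1]
    return None
```

Comparing `sha256_of(downloaded_file)` against the parsed digest before the glance import is the kind of guard the logged `checksum:` line supports.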
2026-04-05 01:18:57 | INFO  | It takes a moment until task 1f7b5643-5d51-4399-ad35-e082b7f63f97 (image-manager) has been started and output is visible here. 2026-04-05 01:20:10.467544 | orchestrator | 2026-04-05 01:18:59 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-04' 2026-04-05 01:20:10.467657 | orchestrator | 2026-04-05 01:18:59 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260404.qcow2: 200 2026-04-05 01:20:10.467669 | orchestrator | 2026-04-05 01:18:59 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-04 2026-04-05 01:20:10.467679 | orchestrator | 2026-04-05 01:18:59 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260404.qcow2 2026-04-05 01:20:10.467693 | orchestrator | 2026-04-05 01:19:01 | INFO  | Waiting for image to leave queued state... 2026-04-05 01:20:10.467707 | orchestrator | 2026-04-05 01:19:03 | INFO  | Waiting for import to complete... 2026-04-05 01:20:10.467725 | orchestrator | 2026-04-05 01:19:13 | INFO  | Waiting for import to complete... 2026-04-05 01:20:10.467736 | orchestrator | 2026-04-05 01:19:23 | INFO  | Waiting for import to complete... 2026-04-05 01:20:10.467749 | orchestrator | 2026-04-05 01:19:33 | INFO  | Waiting for import to complete... 2026-04-05 01:20:10.467763 | orchestrator | 2026-04-05 01:19:43 | INFO  | Waiting for import to complete... 2026-04-05 01:20:10.467775 | orchestrator | 2026-04-05 01:19:53 | INFO  | Waiting for import to complete... 
2026-04-05 01:20:10.467786 | orchestrator | 2026-04-05 01:20:03 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-04' successfully completed, reloading images 2026-04-05 01:20:10.467870 | orchestrator | 2026-04-05 01:20:04 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-04' 2026-04-05 01:20:10.467885 | orchestrator | 2026-04-05 01:20:04 | INFO  | Setting internal_version = 2026-04-04 2026-04-05 01:20:10.467898 | orchestrator | 2026-04-05 01:20:04 | INFO  | Setting image_original_user = ubuntu 2026-04-05 01:20:10.467911 | orchestrator | 2026-04-05 01:20:04 | INFO  | Adding tag amphora 2026-04-05 01:20:10.467923 | orchestrator | 2026-04-05 01:20:04 | INFO  | Adding tag os:ubuntu 2026-04-05 01:20:10.467935 | orchestrator | 2026-04-05 01:20:04 | INFO  | Setting property architecture: x86_64 2026-04-05 01:20:10.467952 | orchestrator | 2026-04-05 01:20:05 | INFO  | Setting property hw_disk_bus: scsi 2026-04-05 01:20:10.467964 | orchestrator | 2026-04-05 01:20:05 | INFO  | Setting property hw_rng_model: virtio 2026-04-05 01:20:10.467976 | orchestrator | 2026-04-05 01:20:05 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-05 01:20:10.467988 | orchestrator | 2026-04-05 01:20:05 | INFO  | Setting property hw_watchdog_action: reset 2026-04-05 01:20:10.467999 | orchestrator | 2026-04-05 01:20:06 | INFO  | Setting property hypervisor_type: qemu 2026-04-05 01:20:10.468011 | orchestrator | 2026-04-05 01:20:06 | INFO  | Setting property os_distro: ubuntu 2026-04-05 01:20:10.468022 | orchestrator | 2026-04-05 01:20:06 | INFO  | Setting property replace_frequency: quarterly 2026-04-05 01:20:10.468034 | orchestrator | 2026-04-05 01:20:07 | INFO  | Setting property uuid_validity: last-1 2026-04-05 01:20:10.468047 | orchestrator | 2026-04-05 01:20:07 | INFO  | Setting property provided_until: none 2026-04-05 01:20:10.468059 | orchestrator | 2026-04-05 01:20:07 | INFO  | Setting property os_purpose: network 2026-04-05 01:20:10.468071 | orchestrator 
| 2026-04-05 01:20:07 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-04-05 01:20:10.468095 | orchestrator | 2026-04-05 01:20:08 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-04-05 01:20:10.468104 | orchestrator | 2026-04-05 01:20:08 | INFO  | Setting property internal_version: 2026-04-04 2026-04-05 01:20:10.468113 | orchestrator | 2026-04-05 01:20:08 | INFO  | Setting property image_original_user: ubuntu 2026-04-05 01:20:10.468121 | orchestrator | 2026-04-05 01:20:09 | INFO  | Setting property os_version: 2026-04-04 2026-04-05 01:20:10.468129 | orchestrator | 2026-04-05 01:20:09 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260404.qcow2 2026-04-05 01:20:10.468138 | orchestrator | 2026-04-05 01:20:09 | INFO  | Setting property image_build_date: 2026-04-04 2026-04-05 01:20:10.468146 | orchestrator | 2026-04-05 01:20:09 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-04' 2026-04-05 01:20:10.468153 | orchestrator | 2026-04-05 01:20:09 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-04' 2026-04-05 01:20:10.468178 | orchestrator | 2026-04-05 01:20:10 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-04-05 01:20:10.468187 | orchestrator | 2026-04-05 01:20:10 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-04-05 01:20:10.468196 | orchestrator | 2026-04-05 01:20:10 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-04-05 01:20:10.468204 | orchestrator | 2026-04-05 01:20:10 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-04-05 01:20:10.878988 | orchestrator | ok: Runtime: 0:03:20.762894 2026-04-05 01:20:10.892979 | 2026-04-05 01:20:10.893108 | TASK [Run checks] 2026-04-05 01:20:11.622176 | orchestrator | + set -e 2026-04-05 01:20:11.622403 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-04-05 01:20:11.622429 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 01:20:11.622451 | orchestrator | ++ INTERACTIVE=false 2026-04-05 01:20:11.622465 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 01:20:11.622477 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 01:20:11.622491 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 01:20:11.624674 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 01:20:11.632242 | orchestrator | 2026-04-05 01:20:11.632317 | orchestrator | # CHECK 2026-04-05 01:20:11.632329 | orchestrator | 2026-04-05 01:20:11.632341 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:20:11.632357 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:20:11.632370 | orchestrator | + echo 2026-04-05 01:20:11.632381 | orchestrator | + echo '# CHECK' 2026-04-05 01:20:11.632392 | orchestrator | + echo 2026-04-05 01:20:11.632408 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 01:20:11.633522 | orchestrator | ++ semver latest 5.0.0 2026-04-05 01:20:11.695371 | orchestrator | 2026-04-05 01:20:11.695456 | orchestrator | ## Containers @ testbed-manager 2026-04-05 01:20:11.695479 | orchestrator | 2026-04-05 01:20:11.695500 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-05 01:20:11.695509 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 01:20:11.695517 | orchestrator | + echo 2026-04-05 01:20:11.695526 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-05 01:20:11.695535 | orchestrator | + echo 2026-04-05 01:20:11.695543 | orchestrator | + osism container testbed-manager ps 2026-04-05 01:20:12.818386 | orchestrator | 2026-04-05 01:20:12 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-04-05 01:20:13.207471 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-04-05 01:20:13.208761 | orchestrator | e3dffdd6ea6b registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_blackbox_exporter 2026-04-05 01:20:13.208886 | orchestrator | af035ce45c09 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager 2026-04-05 01:20:13.208902 | orchestrator | 7c6af8922e07 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-05 01:20:13.208920 | orchestrator | 6418d93646d3 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-04-05 01:20:13.208936 | orchestrator | 14437c5f7bac registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server 2026-04-05 01:20:13.208948 | orchestrator | d8bbf73b8262 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2026-04-05 01:20:13.208958 | orchestrator | 0784cad81d1a registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-04-05 01:20:13.208969 | orchestrator | cd99d57894e3 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-05 01:20:13.209001 | orchestrator | 834e8dd6eb63 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-04-05 01:20:13.209011 | orchestrator | f3a6039feee6 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2026-04-05 01:20:13.209021 | orchestrator | 724ca840e25c registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient 2026-04-05 01:20:13.209031 | orchestrator | 8be2fc07438b 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2026-04-05 01:20:13.209041 | orchestrator | 17c75f8452f0 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-05 01:20:13.209051 | orchestrator | 8790a703260e registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 39 minutes (healthy) manager-inventory_reconciler-1 2026-04-05 01:20:13.209062 | orchestrator | 008df81ee166 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-kubernetes 2026-04-05 01:20:13.209104 | orchestrator | 5a62404e6f9d registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) ceph-ansible 2026-04-05 01:20:13.209115 | orchestrator | 0927cb8b601c registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-ansible 2026-04-05 01:20:13.209125 | orchestrator | e2cbbede5aed registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) kolla-ansible 2026-04-05 01:20:13.209135 | orchestrator | 233430fade79 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1 2026-04-05 01:20:13.209145 | orchestrator | 993e6c9735bc registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 6379/tcp manager-redis-1 2026-04-05 01:20:13.209154 | orchestrator | ffb938afe970 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-listener-1 2026-04-05 01:20:13.209164 | orchestrator | 71afe9f36b91 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago 
Up 40 minutes (healthy) 3306/tcp manager-mariadb-1 2026-04-05 01:20:13.209174 | orchestrator | 30a911753033 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-beat-1 2026-04-05 01:20:13.209192 | orchestrator | 43cbe759d01d registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 40 minutes (healthy) osismclient 2026-04-05 01:20:13.209202 | orchestrator | 08f3c3e69f59 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-openstack-1 2026-04-05 01:20:13.209212 | orchestrator | a3945af6cabb registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-05 01:20:13.209222 | orchestrator | 51a4c094abf1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-flower-1 2026-04-05 01:20:13.209232 | orchestrator | c30b79567838 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 40 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-05 01:20:13.209242 | orchestrator | 755a6768193c registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-05 01:20:13.368668 | orchestrator | 2026-04-05 01:20:13.368782 | orchestrator | ## Images @ testbed-manager 2026-04-05 01:20:13.368800 | orchestrator | 2026-04-05 01:20:13.368856 | orchestrator | + echo 2026-04-05 01:20:13.368869 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-05 01:20:13.368881 | orchestrator | + echo 2026-04-05 01:20:13.368897 | orchestrator | + osism container testbed-manager images 2026-04-05 01:20:14.846171 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 01:20:14.846289 | orchestrator | 
registry.osism.tech/osism/osism-ansible latest a367d0d74b25 About an hour ago 638MB 2026-04-05 01:20:14.846326 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 6412085d20e0 About an hour ago 636MB 2026-04-05 01:20:14.846341 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 9f041662cb1e About an hour ago 1.24GB 2026-04-05 01:20:14.846352 | orchestrator | registry.osism.tech/osism/osism latest 39ad30e360ac About an hour ago 407MB 2026-04-05 01:20:14.846362 | orchestrator | registry.osism.tech/osism/ceph-ansible reef ae335b1618c4 About an hour ago 585MB 2026-04-05 01:20:14.846372 | orchestrator | registry.osism.tech/osism/osism-frontend latest b82497637790 About an hour ago 212MB 2026-04-05 01:20:14.846381 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 64857c311a01 About an hour ago 357MB 2026-04-05 01:20:14.846391 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 2fd96e7e9166 22 hours ago 239MB 2026-04-05 01:20:14.846400 | orchestrator | registry.osism.tech/osism/cephclient reef 0ce6a066fac3 22 hours ago 453MB 2026-04-05 01:20:14.846411 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ad0cbbde9181 23 hours ago 590MB 2026-04-05 01:20:14.846420 | orchestrator | registry.osism.tech/kolla/cron 2024.2 59337077c5b9 23 hours ago 277MB 2026-04-05 01:20:14.846430 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 46754da5c988 23 hours ago 679MB 2026-04-05 01:20:14.846439 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 032fd2da0341 23 hours ago 317MB 2026-04-05 01:20:14.846470 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f141a5d170da 23 hours ago 368MB 2026-04-05 01:20:14.846480 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 4f874e7176d3 23 hours ago 319MB 2026-04-05 01:20:14.846492 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 c04b4a9a3c9e 23 hours ago 850MB 2026-04-05 
01:20:14.846508 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 82ba61566881 23 hours ago 415MB 2026-04-05 01:20:14.846529 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-05 01:20:14.846549 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-05 01:20:14.846564 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-05 01:20:14.846582 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-04-05 01:20:14.846597 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-05 01:20:14.846612 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-05 01:20:14.846628 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-05 01:20:15.010904 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 01:20:15.011025 | orchestrator | ++ semver latest 5.0.0 2026-04-05 01:20:15.059334 | orchestrator | 2026-04-05 01:20:15.059438 | orchestrator | ## Containers @ testbed-node-0 2026-04-05 01:20:15.059453 | orchestrator | 2026-04-05 01:20:15.059464 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-05 01:20:15.059474 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 01:20:15.059484 | orchestrator | + echo 2026-04-05 01:20:15.059494 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-05 01:20:15.059505 | orchestrator | + echo 2026-04-05 01:20:15.059515 | orchestrator | + osism container testbed-node-0 ps 2026-04-05 01:20:16.599514 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 01:20:16.599622 | orchestrator | ffa2213a2118 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes 
(healthy) octavia_worker 2026-04-05 01:20:16.599638 | orchestrator | 494c1bd50d07 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-05 01:20:16.599649 | orchestrator | a5bc84449578 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-05 01:20:16.599677 | orchestrator | 7e0919ec4968 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-04-05 01:20:16.599688 | orchestrator | 4cbbf960840e registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-04-05 01:20:16.599698 | orchestrator | ceae3a043c19 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-04-05 01:20:16.599708 | orchestrator | cab05becccd7 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-05 01:20:16.599718 | orchestrator | 830f3796711f registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api 2026-04-05 01:20:16.599747 | orchestrator | be46fe3b5d37 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-05 01:20:16.599758 | orchestrator | 6808bdab334e registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2026-04-05 01:20:16.599767 | orchestrator | 88584884a62d registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-05 01:20:16.599777 | orchestrator | 0618fc1f88dc registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) 
designate_worker 2026-04-05 01:20:16.599787 | orchestrator | 1983ecff299d registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-04-05 01:20:16.599797 | orchestrator | 31b259e18076 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-05 01:20:16.599828 | orchestrator | 0e22cceb451e registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-04-05 01:20:16.599839 | orchestrator | 61b8563746ac registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-04-05 01:20:16.599849 | orchestrator | a8663d1f77be registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-04-05 01:20:16.599859 | orchestrator | cdc6d0bb403d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-04-05 01:20:16.599868 | orchestrator | 958a4c9b55b5 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-04-05 01:20:16.599878 | orchestrator | 5d37cd27646a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2026-04-05 01:20:16.599888 | orchestrator | 199365bc5461 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2026-04-05 01:20:16.599917 | orchestrator | 8e587e6b1b9c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-04-05 01:20:16.599933 | orchestrator | fac4ef637fa5 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init 
--single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-05 01:20:16.599943 | orchestrator | d7ddfebecaac registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-04-05 01:20:16.599953 | orchestrator | c2dcff234885 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-04-05 01:20:16.599968 | orchestrator | aa466b8ce7c2 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2026-04-05 01:20:16.599978 | orchestrator | 052b91c06ed3 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2026-04-05 01:20:16.599988 | orchestrator | 37df5d5bf0b5 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-04-05 01:20:16.600006 | orchestrator | 7a3736b3b122 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-04-05 01:20:16.600016 | orchestrator | 309e3bdf67a2 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-05 01:20:16.600026 | orchestrator | 96f2480aa45d registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-04-05 01:20:16.600036 | orchestrator | c6c840557f7a registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-04-05 01:20:16.600045 | orchestrator | 2bfbbe6ae700 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-04-05 01:20:16.600055 | orchestrator | da8f1ef155fc 
registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2026-04-05 01:20:16.600065 | orchestrator | be1fcf5d709c registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-04-05 01:20:16.600075 | orchestrator | 538796b061b3 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-04-05 01:20:16.600084 | orchestrator | 0c983d89771c registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-04-05 01:20:16.600094 | orchestrator | 5d1683633616 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-04-05 01:20:16.600104 | orchestrator | 736076bcb720 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-05 01:20:16.600114 | orchestrator | 636e9b7fda70 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-04-05 01:20:16.600124 | orchestrator | 97116c02deec registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-04-05 01:20:16.600134 | orchestrator | 4fc54aa40b7c registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2026-04-05 01:20:16.600144 | orchestrator | 4b6d28072851 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2026-04-05 01:20:16.600153 | orchestrator | 5ebad71fe929 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-04-05 01:20:16.600171 | orchestrator | 58c9c43c1ff2 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 
minutes ago Up 24 minutes (healthy) haproxy 2026-04-05 01:20:16.600182 | orchestrator | a313be65f886 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-04-05 01:20:16.600196 | orchestrator | c0b76626756c registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-04-05 01:20:16.600219 | orchestrator | 2180d8e7c919 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2026-04-05 01:20:16.600235 | orchestrator | e5ff8ec9616d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2026-04-05 01:20:16.600250 | orchestrator | 490d8d981860 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-04-05 01:20:16.600266 | orchestrator | 13637819e331 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-04-05 01:20:16.600282 | orchestrator | 9f6a171b347c registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-04-05 01:20:16.600299 | orchestrator | cb41ceec137e registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-04-05 01:20:16.600317 | orchestrator | f60303a6e717 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-04-05 01:20:16.600333 | orchestrator | 97339a035f44 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-04-05 01:20:16.600348 | orchestrator | 88ad39d675a5 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-04-05 01:20:16.600358 | 
orchestrator | f14d7cfd71e8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-04-05 01:20:16.600368 | orchestrator | 104ae61d00d6 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-05 01:20:16.600378 | orchestrator | 6d5675aabaae registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-04-05 01:20:16.767588 | orchestrator | 2026-04-05 01:20:16.767726 | orchestrator | ## Images @ testbed-node-0 2026-04-05 01:20:16.767755 | orchestrator | 2026-04-05 01:20:16.767776 | orchestrator | + echo 2026-04-05 01:20:16.767797 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-05 01:20:16.767870 | orchestrator | + echo 2026-04-05 01:20:16.767883 | orchestrator | + osism container testbed-node-0 images 2026-04-05 01:20:18.289550 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 01:20:18.289655 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9c619feb45c6 22 hours ago 1.35GB 2026-04-05 01:20:18.289694 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ad0cbbde9181 23 hours ago 590MB 2026-04-05 01:20:18.289706 | orchestrator | registry.osism.tech/kolla/cron 2024.2 59337077c5b9 23 hours ago 277MB 2026-04-05 01:20:18.289717 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 46754da5c988 23 hours ago 679MB 2026-04-05 01:20:18.289728 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 53cb5be99aef 23 hours ago 1.54GB 2026-04-05 01:20:18.289739 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 19d50baf2114 23 hours ago 1.57GB 2026-04-05 01:20:18.289751 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2bc61ec54187 23 hours ago 287MB 2026-04-05 01:20:18.289762 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4430d8853777 23 hours ago 1.04GB 2026-04-05 01:20:18.289773 | orchestrator | 
registry.osism.tech/kolla/proxysql 2024.2 496dc5bf4ba8 23 hours ago 427MB 2026-04-05 01:20:18.289834 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 cb2ddb547a10 23 hours ago 277MB 2026-04-05 01:20:18.289848 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 562fa0663663 23 hours ago 333MB 2026-04-05 01:20:18.289858 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 04ab7c751084 23 hours ago 285MB 2026-04-05 01:20:18.289869 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 f8827e45274c 23 hours ago 303MB 2026-04-05 01:20:18.289880 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 032fd2da0341 23 hours ago 317MB 2026-04-05 01:20:18.289891 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f141a5d170da 23 hours ago 368MB 2026-04-05 01:20:18.289902 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c9a9bb739f53 23 hours ago 309MB 2026-04-05 01:20:18.289913 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 a226e99a65d8 23 hours ago 312MB 2026-04-05 01:20:18.289924 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 b75d3f0bc6cd 23 hours ago 290MB 2026-04-05 01:20:18.289935 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 1c302eb74111 23 hours ago 290MB 2026-04-05 01:20:18.289946 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7090fab32431 23 hours ago 284MB 2026-04-05 01:20:18.289957 | orchestrator | registry.osism.tech/kolla/redis 2024.2 a70d86539888 23 hours ago 284MB 2026-04-05 01:20:18.289967 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 8962d87f9352 23 hours ago 463MB 2026-04-05 01:20:18.289978 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 a1e75dff8bc4 23 hours ago 1.16GB 2026-04-05 01:20:18.289989 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 483d205bb699 23 hours ago 851MB 2026-04-05 
01:20:18.290000 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 86469296f423 23 hours ago 851MB 2026-04-05 01:20:18.290011 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 44b3cb34da7a 23 hours ago 851MB 2026-04-05 01:20:18.290063 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1d7ff124f857 23 hours ago 851MB 2026-04-05 01:20:18.290075 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 83f2542c5656 23 hours ago 987MB 2026-04-05 01:20:18.290086 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 e9b28de3522c 23 hours ago 987MB 2026-04-05 01:20:18.290096 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 97c98331201a 23 hours ago 1.14GB 2026-04-05 01:20:18.290107 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0afc71765fa4 23 hours ago 1.25GB 2026-04-05 01:20:18.290118 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2d516175da16 23 hours ago 987MB 2026-04-05 01:20:18.290129 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0dfb512e46c5 23 hours ago 1.08GB 2026-04-05 01:20:18.290139 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 63bdeb2ebc47 23 hours ago 1.05GB 2026-04-05 01:20:18.290150 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 2921b2dd08f4 23 hours ago 1.05GB 2026-04-05 01:20:18.290161 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 cde3242db83b 23 hours ago 995MB 2026-04-05 01:20:18.290199 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0a2d4cd0bd6b 23 hours ago 995MB 2026-04-05 01:20:18.290211 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4acd83ad59ad 23 hours ago 994MB 2026-04-05 01:20:18.290222 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 be23fc2fb920 23 hours ago 1e+03MB 2026-04-05 01:20:18.290242 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6fc5b5164791 23 hours ago 995MB 
2026-04-05 01:20:18.290257 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 c6423fce5b77 23 hours ago 1e+03MB 2026-04-05 01:20:18.290276 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 17bcdb15a481 23 hours ago 1.04GB 2026-04-05 01:20:18.290294 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 8ca0b197bf6e 23 hours ago 1.06GB 2026-04-05 01:20:18.290312 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 3a92408dba9d 23 hours ago 1.04GB 2026-04-05 01:20:18.290331 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 8ab782831c60 23 hours ago 1.04GB 2026-04-05 01:20:18.290350 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a6753f4ba9a2 23 hours ago 1.06GB 2026-04-05 01:20:18.290362 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3dee3957357a 23 hours ago 1GB 2026-04-05 01:20:18.290372 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 47296cf89525 23 hours ago 1GB 2026-04-05 01:20:18.290383 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 1d6d20a1d5ae 23 hours ago 1GB 2026-04-05 01:20:18.290394 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 96c343e0ac57 23 hours ago 1.06GB 2026-04-05 01:20:18.290404 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 83a53e1f6b8e 23 hours ago 1GB 2026-04-05 01:20:18.290415 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b640fd3b5545 23 hours ago 1.22GB 2026-04-05 01:20:18.290426 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0d09af86e625 23 hours ago 1.22GB 2026-04-05 01:20:18.290437 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 1748b47df461 23 hours ago 1.38GB 2026-04-05 01:20:18.290448 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 af0b666d7896 23 hours ago 1.22GB 2026-04-05 01:20:18.290458 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 22b1bb43b011 
23 hours ago 984MB 2026-04-05 01:20:18.290469 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 cd5ec0b6376d 23 hours ago 985MB 2026-04-05 01:20:18.290479 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 12132f25d37c 23 hours ago 985MB 2026-04-05 01:20:18.290497 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 99d34aa164f6 23 hours ago 985MB 2026-04-05 01:20:18.290508 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 0d757bb8f635 23 hours ago 1.17GB 2026-04-05 01:20:18.290518 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 ad339f2d0098 23 hours ago 1.11GB 2026-04-05 01:20:18.290529 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 4ff43697e718 23 hours ago 1.42GB 2026-04-05 01:20:18.290540 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 18e876527ba0 23 hours ago 1.73GB 2026-04-05 01:20:18.290550 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 b7c066d53585 23 hours ago 1.42GB 2026-04-05 01:20:18.290561 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 77a22692e243 23 hours ago 1.42GB 2026-04-05 01:20:18.443210 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 01:20:18.444232 | orchestrator | ++ semver latest 5.0.0 2026-04-05 01:20:18.487905 | orchestrator | 2026-04-05 01:20:18.487996 | orchestrator | ## Containers @ testbed-node-1 2026-04-05 01:20:18.488011 | orchestrator | 2026-04-05 01:20:18.488023 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-05 01:20:18.488035 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 01:20:18.488047 | orchestrator | + echo 2026-04-05 01:20:18.488085 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-05 01:20:18.488098 | orchestrator | + echo 2026-04-05 01:20:18.488110 | orchestrator | + osism container testbed-node-1 ps 2026-04-05 01:20:20.043056 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 
01:20:20.043160 | orchestrator | 4f5e840f423b registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-05 01:20:20.043177 | orchestrator | 9866dafb9184 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-05 01:20:20.043187 | orchestrator | cf06c0b2cec8 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-05 01:20:20.043197 | orchestrator | 04982f59f499 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-04-05 01:20:20.043208 | orchestrator | 757a4ea05ab6 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-04-05 01:20:20.043217 | orchestrator | f1de72d9445a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-05 01:20:20.043226 | orchestrator | ba2837881a46 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-05 01:20:20.043236 | orchestrator | 7b95543691e8 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-05 01:20:20.043251 | orchestrator | af89eda9080f registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) placement_api 2026-04-05 01:20:20.043261 | orchestrator | 4c412f909ab1 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2026-04-05 01:20:20.043271 | orchestrator | 1468bcb1c3f5 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-05 01:20:20.043281 | 
orchestrator | bc0a389d7388 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-05 01:20:20.043290 | orchestrator | 44ef53244b77 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-04-05 01:20:20.043300 | orchestrator | 37c046496efc registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-05 01:20:20.043330 | orchestrator | 0a83cc8ea4a5 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-04-05 01:20:20.043340 | orchestrator | 199b4e7b2e15 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2026-04-05 01:20:20.043350 | orchestrator | b494716156cd registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-04-05 01:20:20.043360 | orchestrator | 8e10b71aec31 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-04-05 01:20:20.043390 | orchestrator | f326a00d22d7 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-04-05 01:20:20.043401 | orchestrator | 96ac27f443f9 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2026-04-05 01:20:20.043411 | orchestrator | a7783d7fb083 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2026-04-05 01:20:20.043440 | orchestrator | 72c6bd66964c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes 
(healthy) nova_scheduler 2026-04-05 01:20:20.043451 | orchestrator | c0237b45a824 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-04-05 01:20:20.043462 | orchestrator | 53cb207568b9 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-04-05 01:20:20.043472 | orchestrator | f3b4a2794d40 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-04-05 01:20:20.043482 | orchestrator | 4a6e45cd6329 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2026-04-05 01:20:20.043492 | orchestrator | 45b420d6329c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2026-04-05 01:20:20.043502 | orchestrator | 7337ae42cf15 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-04-05 01:20:20.043512 | orchestrator | f357ca21088f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-04-05 01:20:20.043522 | orchestrator | aeb3afc82ce4 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-05 01:20:20.043532 | orchestrator | eb1b02e10a14 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-04-05 01:20:20.043541 | orchestrator | 41786eef7408 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2026-04-05 01:20:20.043551 | orchestrator | b0b1e10dc7a4 
registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-04-05 01:20:20.043560 | orchestrator | b0f128194dfd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2026-04-05 01:20:20.043570 | orchestrator | d4c101f7f9a9 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-04-05 01:20:20.043580 | orchestrator | 460c234a6517 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-04-05 01:20:20.043590 | orchestrator | 792f30388ce0 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-04-05 01:20:20.043606 | orchestrator | b7c69d980da6 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-04-05 01:20:20.043635 | orchestrator | 8f0fa2717c04 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-04-05 01:20:20.043646 | orchestrator | 840392d954d0 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-04-05 01:20:20.043657 | orchestrator | b0a4a682b129 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-04-05 01:20:20.043667 | orchestrator | 06bac80ea192 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 23 minutes keepalived 2026-04-05 01:20:20.043677 | orchestrator | e5f55814a9e9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2026-04-05 01:20:20.043687 | orchestrator | ec907af5c3b7 registry.osism.tech/kolla/proxysql:2024.2 
"dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-04-05 01:20:20.043705 | orchestrator | 3bdad57386ac registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-04-05 01:20:20.043715 | orchestrator | 97db4a41bf47 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-04-05 01:20:20.043725 | orchestrator | a828aef9d43a registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-04-05 01:20:20.043734 | orchestrator | 41d9fa62a25a registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2026-04-05 01:20:20.043745 | orchestrator | e49deade5e91 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2026-04-05 01:20:20.043754 | orchestrator | f0d09d48e01d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-04-05 01:20:20.043764 | orchestrator | 1e4ee119734e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-04-05 01:20:20.043773 | orchestrator | 2372f00fa7ec registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-04-05 01:20:20.043783 | orchestrator | 769ea6254aa3 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-04-05 01:20:20.043793 | orchestrator | eefd437317b5 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-04-05 01:20:20.043803 | orchestrator | fc6871569ad3 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-04-05 
01:20:20.043833 | orchestrator | c091e8fca7e8 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-04-05 01:20:20.043843 | orchestrator | b1a8e83595bb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-04-05 01:20:20.043853 | orchestrator | c0a794eb7aa5 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-05 01:20:20.043869 | orchestrator | e8490b55d874 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-04-05 01:20:20.217080 | orchestrator | 2026-04-05 01:20:20.217166 | orchestrator | ## Images @ testbed-node-1 2026-04-05 01:20:20.217181 | orchestrator | 2026-04-05 01:20:20.217191 | orchestrator | + echo 2026-04-05 01:20:20.217200 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-05 01:20:20.217209 | orchestrator | + echo 2026-04-05 01:20:20.217219 | orchestrator | + osism container testbed-node-1 images 2026-04-05 01:20:21.732064 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 01:20:21.732153 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9c619feb45c6 22 hours ago 1.35GB 2026-04-05 01:20:21.732174 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ad0cbbde9181 23 hours ago 590MB 2026-04-05 01:20:21.732191 | orchestrator | registry.osism.tech/kolla/cron 2024.2 59337077c5b9 23 hours ago 277MB 2026-04-05 01:20:21.732203 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 46754da5c988 23 hours ago 679MB 2026-04-05 01:20:21.732215 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 53cb5be99aef 23 hours ago 1.54GB 2026-04-05 01:20:21.732227 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 19d50baf2114 23 hours ago 1.57GB 2026-04-05 01:20:21.732259 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2bc61ec54187 23 hours ago 287MB 
2026-04-05 01:20:21.732273 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4430d8853777 23 hours ago 1.04GB 2026-04-05 01:20:21.732285 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 496dc5bf4ba8 23 hours ago 427MB 2026-04-05 01:20:21.732297 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 cb2ddb547a10 23 hours ago 277MB 2026-04-05 01:20:21.732314 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 562fa0663663 23 hours ago 333MB 2026-04-05 01:20:21.732327 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 04ab7c751084 23 hours ago 285MB 2026-04-05 01:20:21.732339 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 f8827e45274c 23 hours ago 303MB 2026-04-05 01:20:21.732352 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 032fd2da0341 23 hours ago 317MB 2026-04-05 01:20:21.732365 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f141a5d170da 23 hours ago 368MB 2026-04-05 01:20:21.732378 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c9a9bb739f53 23 hours ago 309MB 2026-04-05 01:20:21.732390 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 a226e99a65d8 23 hours ago 312MB 2026-04-05 01:20:21.732400 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 b75d3f0bc6cd 23 hours ago 290MB 2026-04-05 01:20:21.732408 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 1c302eb74111 23 hours ago 290MB 2026-04-05 01:20:21.732415 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7090fab32431 23 hours ago 284MB 2026-04-05 01:20:21.732423 | orchestrator | registry.osism.tech/kolla/redis 2024.2 a70d86539888 23 hours ago 284MB 2026-04-05 01:20:21.732430 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 8962d87f9352 23 hours ago 463MB 2026-04-05 01:20:21.732437 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 a1e75dff8bc4 23 
hours ago 1.16GB 2026-04-05 01:20:21.732445 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 483d205bb699 23 hours ago 851MB 2026-04-05 01:20:21.732470 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 86469296f423 23 hours ago 851MB 2026-04-05 01:20:21.732477 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 44b3cb34da7a 23 hours ago 851MB 2026-04-05 01:20:21.732485 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1d7ff124f857 23 hours ago 851MB 2026-04-05 01:20:21.732492 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 97c98331201a 23 hours ago 1.14GB 2026-04-05 01:20:21.732499 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0afc71765fa4 23 hours ago 1.25GB 2026-04-05 01:20:21.732506 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2d516175da16 23 hours ago 987MB 2026-04-05 01:20:21.732514 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0dfb512e46c5 23 hours ago 1.08GB 2026-04-05 01:20:21.732522 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 63bdeb2ebc47 23 hours ago 1.05GB 2026-04-05 01:20:21.732529 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 2921b2dd08f4 23 hours ago 1.05GB 2026-04-05 01:20:21.732536 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 cde3242db83b 23 hours ago 995MB 2026-04-05 01:20:21.732544 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0a2d4cd0bd6b 23 hours ago 995MB 2026-04-05 01:20:21.732551 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4acd83ad59ad 23 hours ago 994MB 2026-04-05 01:20:21.732576 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 be23fc2fb920 23 hours ago 1e+03MB 2026-04-05 01:20:21.732584 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6fc5b5164791 23 hours ago 995MB 2026-04-05 01:20:21.732591 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 c6423fce5b77 23 
hours ago 1e+03MB 2026-04-05 01:20:21.732598 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 17bcdb15a481 23 hours ago 1.04GB 2026-04-05 01:20:21.732605 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 8ca0b197bf6e 23 hours ago 1.06GB 2026-04-05 01:20:21.732614 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 3a92408dba9d 23 hours ago 1.04GB 2026-04-05 01:20:21.732622 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 8ab782831c60 23 hours ago 1.04GB 2026-04-05 01:20:21.732631 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a6753f4ba9a2 23 hours ago 1.06GB 2026-04-05 01:20:21.732640 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3dee3957357a 23 hours ago 1GB 2026-04-05 01:20:21.732648 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 47296cf89525 23 hours ago 1GB 2026-04-05 01:20:21.732656 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 1d6d20a1d5ae 23 hours ago 1GB 2026-04-05 01:20:21.732665 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b640fd3b5545 23 hours ago 1.22GB 2026-04-05 01:20:21.732673 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0d09af86e625 23 hours ago 1.22GB 2026-04-05 01:20:21.732682 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 1748b47df461 23 hours ago 1.38GB 2026-04-05 01:20:21.732690 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 af0b666d7896 23 hours ago 1.22GB 2026-04-05 01:20:21.732698 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 0d757bb8f635 23 hours ago 1.17GB 2026-04-05 01:20:21.732707 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 ad339f2d0098 23 hours ago 1.11GB 2026-04-05 01:20:21.732715 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 4ff43697e718 23 hours ago 1.42GB 2026-04-05 01:20:21.732728 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 
18e876527ba0 23 hours ago 1.73GB 2026-04-05 01:20:21.732742 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 b7c066d53585 23 hours ago 1.42GB 2026-04-05 01:20:21.732751 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 77a22692e243 23 hours ago 1.42GB 2026-04-05 01:20:21.888307 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 01:20:21.888417 | orchestrator | ++ semver latest 5.0.0 2026-04-05 01:20:21.943591 | orchestrator | 2026-04-05 01:20:21.943679 | orchestrator | ## Containers @ testbed-node-2 2026-04-05 01:20:21.943695 | orchestrator | 2026-04-05 01:20:21.943707 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-05 01:20:21.943718 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 01:20:21.943730 | orchestrator | + echo 2026-04-05 01:20:21.943742 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-05 01:20:21.943754 | orchestrator | + echo 2026-04-05 01:20:21.943765 | orchestrator | + osism container testbed-node-2 ps 2026-04-05 01:20:23.481220 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 01:20:23.481360 | orchestrator | 9a15da295f2b registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-05 01:20:23.481387 | orchestrator | fb0114fef9a1 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-05 01:20:23.482416 | orchestrator | d34b0f32d1e7 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-05 01:20:23.482515 | orchestrator | 4e3e93c7bfda registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-04-05 01:20:23.482538 | orchestrator | 94133adfb896 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init 
--single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-04-05 01:20:23.482552 | orchestrator | 2e5929df0efd registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-05 01:20:23.482563 | orchestrator | 574be292aa3b registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-05 01:20:23.482575 | orchestrator | 4a129a073b5a registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-05 01:20:23.482585 | orchestrator | fdbced6fbaa9 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) placement_api 2026-04-05 01:20:23.482596 | orchestrator | 29074a70bc9d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2026-04-05 01:20:23.482607 | orchestrator | 4084d8608be8 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-05 01:20:23.482617 | orchestrator | c1a3dfe1a263 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-05 01:20:23.482628 | orchestrator | fbef7175e0ef registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server 2026-04-05 01:20:23.482639 | orchestrator | 5459f011d835 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2026-04-05 01:20:23.482683 | orchestrator | c08db5f984e4 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2026-04-05 01:20:23.482703 | orchestrator | 93ba8332b162 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes 
ago Up 11 minutes (healthy) designate_central 2026-04-05 01:20:23.482722 | orchestrator | 4f61aba2cb8d registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2026-04-05 01:20:23.482740 | orchestrator | 983fad89998e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2026-04-05 01:20:23.482759 | orchestrator | 997288fd2a08 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2026-04-05 01:20:23.482778 | orchestrator | 7a9bd0eb9c37 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2026-04-05 01:20:23.482796 | orchestrator | 1bdea4c39c14 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2026-04-05 01:20:23.482898 | orchestrator | 8a69cc05dfbf registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-05 01:20:23.482912 | orchestrator | 7552486739d8 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2026-04-05 01:20:23.482923 | orchestrator | fe1518471f4a registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_backup 2026-04-05 01:20:23.482934 | orchestrator | 70c26d1e083f registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_volume 2026-04-05 01:20:23.482945 | orchestrator | 129a4012a069 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2026-04-05 01:20:23.482956 | orchestrator | 3f91f22e43f3 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init 
--single-…" 15 minutes ago Up 14 minutes (healthy) cinder_scheduler 2026-04-05 01:20:23.482967 | orchestrator | 5a56acc7f169 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2026-04-05 01:20:23.482978 | orchestrator | d6255a978869 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-04-05 01:20:23.483009 | orchestrator | 270b22e6764f registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-04-05 01:20:23.483020 | orchestrator | 6a68f6762f0f registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2026-04-05 01:20:23.483031 | orchestrator | 3b0ab9336a25 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2026-04-05 01:20:23.483042 | orchestrator | a837b0b0b01c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-04-05 01:20:23.483062 | orchestrator | fc88bac151bd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2026-04-05 01:20:23.483073 | orchestrator | fc8a11d9b7df registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2026-04-05 01:20:23.483084 | orchestrator | 7f190bee7a25 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2026-04-05 01:20:23.483095 | orchestrator | 137fc93f9848 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2026-04-05 01:20:23.483111 | orchestrator | 6f635a81131e 
registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-04-05 01:20:23.483122 | orchestrator | d8f86865a5e3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-04-05 01:20:23.483133 | orchestrator | 69488e02ee1b registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-04-05 01:20:23.483144 | orchestrator | 95057b1b13e6 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2026-04-05 01:20:23.483155 | orchestrator | e513f673287f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-04-05 01:20:23.483166 | orchestrator | 600b49713756 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2026-04-05 01:20:23.483177 | orchestrator | b77f0c260245 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-04-05 01:20:23.483197 | orchestrator | 9f5b8a9d3839 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-04-05 01:20:23.483209 | orchestrator | cd575dd5b38b registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-04-05 01:20:23.483220 | orchestrator | 9d0ecb3db44e registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2026-04-05 01:20:23.483231 | orchestrator | be6e342201a1 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2026-04-05 01:20:23.483242 | orchestrator | 6ae87dfa5c87 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes 
(healthy) rabbitmq 2026-04-05 01:20:23.483254 | orchestrator | 80df088073d2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2026-04-05 01:20:23.483265 | orchestrator | fec5ecb195d4 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-04-05 01:20:23.483276 | orchestrator | 5682eeba2c07 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-04-05 01:20:23.483286 | orchestrator | 18614d75b1a5 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-04-05 01:20:23.483305 | orchestrator | 07d6f4030bba registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-04-05 01:20:23.483316 | orchestrator | 2758e58b0044 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-04-05 01:20:23.483328 | orchestrator | 54abb639ad06 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-04-05 01:20:23.483339 | orchestrator | 8668a4a3a382 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-04-05 01:20:23.483351 | orchestrator | e49310471dfc registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-04-05 01:20:23.483362 | orchestrator | c190b7655afa registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-04-05 01:20:23.628546 | orchestrator | 2026-04-05 01:20:23.628640 | orchestrator | ## Images @ testbed-node-2 2026-04-05 01:20:23.628657 | orchestrator | 2026-04-05 01:20:23.628669 | orchestrator | + echo 2026-04-05 01:20:23.628681 | orchestrator 
| + echo '## Images @ testbed-node-2' 2026-04-05 01:20:23.628693 | orchestrator | + echo 2026-04-05 01:20:23.628704 | orchestrator | + osism container testbed-node-2 images 2026-04-05 01:20:25.144998 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 01:20:25.145123 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 9c619feb45c6 22 hours ago 1.35GB 2026-04-05 01:20:25.145151 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ad0cbbde9181 23 hours ago 590MB 2026-04-05 01:20:25.145172 | orchestrator | registry.osism.tech/kolla/cron 2024.2 59337077c5b9 23 hours ago 277MB 2026-04-05 01:20:25.145193 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 46754da5c988 23 hours ago 679MB 2026-04-05 01:20:25.145214 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 53cb5be99aef 23 hours ago 1.54GB 2026-04-05 01:20:25.145234 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 19d50baf2114 23 hours ago 1.57GB 2026-04-05 01:20:25.145249 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2bc61ec54187 23 hours ago 287MB 2026-04-05 01:20:25.145269 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 4430d8853777 23 hours ago 1.04GB 2026-04-05 01:20:25.145287 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 496dc5bf4ba8 23 hours ago 427MB 2026-04-05 01:20:25.145306 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 cb2ddb547a10 23 hours ago 277MB 2026-04-05 01:20:25.145326 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 04ab7c751084 23 hours ago 285MB 2026-04-05 01:20:25.145370 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 562fa0663663 23 hours ago 333MB 2026-04-05 01:20:25.145384 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 f8827e45274c 23 hours ago 303MB 2026-04-05 01:20:25.145395 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 032fd2da0341 23 hours ago 317MB 2026-04-05 01:20:25.145407 | 
orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f141a5d170da 23 hours ago 368MB 2026-04-05 01:20:25.145418 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 c9a9bb739f53 23 hours ago 309MB 2026-04-05 01:20:25.145431 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 a226e99a65d8 23 hours ago 312MB 2026-04-05 01:20:25.145444 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 b75d3f0bc6cd 23 hours ago 290MB 2026-04-05 01:20:25.145478 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 1c302eb74111 23 hours ago 290MB 2026-04-05 01:20:25.145491 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 7090fab32431 23 hours ago 284MB 2026-04-05 01:20:25.145504 | orchestrator | registry.osism.tech/kolla/redis 2024.2 a70d86539888 23 hours ago 284MB 2026-04-05 01:20:25.145532 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 8962d87f9352 23 hours ago 463MB 2026-04-05 01:20:25.145546 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 a1e75dff8bc4 23 hours ago 1.16GB 2026-04-05 01:20:25.145559 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 483d205bb699 23 hours ago 851MB 2026-04-05 01:20:25.145571 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 86469296f423 23 hours ago 851MB 2026-04-05 01:20:25.145583 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 44b3cb34da7a 23 hours ago 851MB 2026-04-05 01:20:25.145596 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1d7ff124f857 23 hours ago 851MB 2026-04-05 01:20:25.145608 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 97c98331201a 23 hours ago 1.14GB 2026-04-05 01:20:25.145619 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0afc71765fa4 23 hours ago 1.25GB 2026-04-05 01:20:25.145630 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2d516175da16 23 hours ago 987MB 2026-04-05 
01:20:25.145641 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 0dfb512e46c5 23 hours ago 1.08GB 2026-04-05 01:20:25.145652 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 63bdeb2ebc47 23 hours ago 1.05GB 2026-04-05 01:20:25.145663 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 2921b2dd08f4 23 hours ago 1.05GB 2026-04-05 01:20:25.145674 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 cde3242db83b 23 hours ago 995MB 2026-04-05 01:20:25.145685 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0a2d4cd0bd6b 23 hours ago 995MB 2026-04-05 01:20:25.145696 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 4acd83ad59ad 23 hours ago 994MB 2026-04-05 01:20:25.145729 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 be23fc2fb920 23 hours ago 1e+03MB 2026-04-05 01:20:25.145741 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6fc5b5164791 23 hours ago 995MB 2026-04-05 01:20:25.145758 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 c6423fce5b77 23 hours ago 1e+03MB 2026-04-05 01:20:25.145769 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 17bcdb15a481 23 hours ago 1.04GB 2026-04-05 01:20:25.145780 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 8ca0b197bf6e 23 hours ago 1.06GB 2026-04-05 01:20:25.145791 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 3a92408dba9d 23 hours ago 1.04GB 2026-04-05 01:20:25.145802 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 8ab782831c60 23 hours ago 1.04GB 2026-04-05 01:20:25.145842 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a6753f4ba9a2 23 hours ago 1.06GB 2026-04-05 01:20:25.145854 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 3dee3957357a 23 hours ago 1GB 2026-04-05 01:20:25.145865 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 
47296cf89525 23 hours ago 1GB 2026-04-05 01:20:25.145875 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 1d6d20a1d5ae 23 hours ago 1GB 2026-04-05 01:20:25.145896 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b640fd3b5545 23 hours ago 1.22GB 2026-04-05 01:20:25.145907 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0d09af86e625 23 hours ago 1.22GB 2026-04-05 01:20:25.145917 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 1748b47df461 23 hours ago 1.38GB 2026-04-05 01:20:25.145928 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 af0b666d7896 23 hours ago 1.22GB 2026-04-05 01:20:25.145939 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 0d757bb8f635 23 hours ago 1.17GB 2026-04-05 01:20:25.145950 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 ad339f2d0098 23 hours ago 1.11GB 2026-04-05 01:20:25.145960 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 4ff43697e718 23 hours ago 1.42GB 2026-04-05 01:20:25.145971 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 18e876527ba0 23 hours ago 1.73GB 2026-04-05 01:20:25.145982 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 b7c066d53585 23 hours ago 1.42GB 2026-04-05 01:20:25.145993 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 77a22692e243 23 hours ago 1.42GB 2026-04-05 01:20:25.298353 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-05 01:20:25.303441 | orchestrator | + set -e 2026-04-05 01:20:25.303543 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 01:20:25.304293 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 01:20:25.304329 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 01:20:25.304342 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 01:20:25.304354 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 01:20:25.304366 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 01:20:25.304378 | 
orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 01:20:25.304389 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:20:25.304400 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:20:25.304412 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 01:20:25.304422 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 01:20:25.304434 | orchestrator | ++ export ARA=false 2026-04-05 01:20:25.304445 | orchestrator | ++ ARA=false 2026-04-05 01:20:25.304455 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 01:20:25.304466 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 01:20:25.304477 | orchestrator | ++ export TEMPEST=true 2026-04-05 01:20:25.304488 | orchestrator | ++ TEMPEST=true 2026-04-05 01:20:25.304499 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 01:20:25.304510 | orchestrator | ++ IS_ZUUL=true 2026-04-05 01:20:25.304529 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 01:20:25.304548 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 01:20:25.304566 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 01:20:25.304586 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 01:20:25.304605 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 01:20:25.304621 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 01:20:25.304639 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 01:20:25.304657 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 01:20:25.304674 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 01:20:25.304691 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 01:20:25.304707 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 01:20:25.304724 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-05 01:20:25.315389 | orchestrator | + set -e 2026-04-05 01:20:25.315455 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 01:20:25.315471 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-05 01:20:25.315486 | orchestrator | ++ INTERACTIVE=false 2026-04-05 01:20:25.315497 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 01:20:25.315509 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 01:20:25.315805 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 01:20:25.316590 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 01:20:25.319606 | orchestrator | 2026-04-05 01:20:25.319637 | orchestrator | # Ceph status 2026-04-05 01:20:25.319649 | orchestrator | 2026-04-05 01:20:25.319660 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:20:25.319671 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:20:25.319683 | orchestrator | + echo 2026-04-05 01:20:25.319720 | orchestrator | + echo '# Ceph status' 2026-04-05 01:20:25.319731 | orchestrator | + echo 2026-04-05 01:20:25.319742 | orchestrator | + ceph -s 2026-04-05 01:20:25.930429 | orchestrator | cluster: 2026-04-05 01:20:25.930519 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-05 01:20:25.930533 | orchestrator | health: HEALTH_OK 2026-04-05 01:20:25.930544 | orchestrator | 2026-04-05 01:20:25.930554 | orchestrator | services: 2026-04-05 01:20:25.930564 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2026-04-05 01:20:25.930574 | orchestrator | mgr: testbed-node-2(active, since 17m), standbys: testbed-node-1, testbed-node-0 2026-04-05 01:20:25.930584 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-05 01:20:25.930593 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 25m) 2026-04-05 01:20:25.930603 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-05 01:20:25.930612 | orchestrator | 2026-04-05 01:20:25.930621 | orchestrator | data: 2026-04-05 01:20:25.930630 | orchestrator | volumes: 1/1 healthy 2026-04-05 01:20:25.930639 
| orchestrator | pools: 14 pools, 401 pgs 2026-04-05 01:20:25.930648 | orchestrator | objects: 556 objects, 2.2 GiB 2026-04-05 01:20:25.930657 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-05 01:20:25.930666 | orchestrator | pgs: 401 active+clean 2026-04-05 01:20:25.930675 | orchestrator | 2026-04-05 01:20:25.977991 | orchestrator | 2026-04-05 01:20:25.978163 | orchestrator | # Ceph versions 2026-04-05 01:20:25.978179 | orchestrator | 2026-04-05 01:20:25.978191 | orchestrator | + echo 2026-04-05 01:20:25.978203 | orchestrator | + echo '# Ceph versions' 2026-04-05 01:20:25.978215 | orchestrator | + echo 2026-04-05 01:20:25.978226 | orchestrator | + ceph versions 2026-04-05 01:20:26.657562 | orchestrator | { 2026-04-05 01:20:26.657678 | orchestrator | "mon": { 2026-04-05 01:20:26.657702 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:20:26.657720 | orchestrator | }, 2026-04-05 01:20:26.657736 | orchestrator | "mgr": { 2026-04-05 01:20:26.657751 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:20:26.657767 | orchestrator | }, 2026-04-05 01:20:26.657782 | orchestrator | "osd": { 2026-04-05 01:20:26.657795 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-05 01:20:26.657808 | orchestrator | }, 2026-04-05 01:20:26.657853 | orchestrator | "mds": { 2026-04-05 01:20:26.657869 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:20:26.657884 | orchestrator | }, 2026-04-05 01:20:26.657898 | orchestrator | "rgw": { 2026-04-05 01:20:26.657939 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-05 01:20:26.657961 | orchestrator | }, 2026-04-05 01:20:26.657975 | orchestrator | "overall": { 2026-04-05 01:20:26.657991 | orchestrator | "ceph version 18.2.8 
(efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-05 01:20:26.658006 | orchestrator | } 2026-04-05 01:20:26.658085 | orchestrator | } 2026-04-05 01:20:26.716844 | orchestrator | 2026-04-05 01:20:26.716937 | orchestrator | # Ceph OSD tree 2026-04-05 01:20:26.716975 | orchestrator | 2026-04-05 01:20:26.716987 | orchestrator | + echo 2026-04-05 01:20:26.716997 | orchestrator | + echo '# Ceph OSD tree' 2026-04-05 01:20:26.717008 | orchestrator | + echo 2026-04-05 01:20:26.717018 | orchestrator | + ceph osd df tree 2026-04-05 01:20:27.249577 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-05 01:20:27.249692 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 412 MiB 113 GiB 5.90 1.00 - root default 2026-04-05 01:20:27.249708 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-04-05 01:20:27.249720 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.77 0.98 190 up osd.0 2026-04-05 01:20:27.249731 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 6.06 1.03 202 up osd.4 2026-04-05 01:20:27.249742 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 126 MiB 38 GiB 5.87 0.99 - host testbed-node-4 2026-04-05 01:20:27.249753 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 52 MiB 19 GiB 5.74 0.97 209 up osd.1 2026-04-05 01:20:27.249789 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 6.00 1.02 181 up osd.3 2026-04-05 01:20:27.249801 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-04-05 01:20:27.249886 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.57 1.11 191 up osd.2 2026-04-05 01:20:27.249901 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1003 MiB 1 KiB 74 MiB 19 GiB 5.26 0.89 197 up osd.5 2026-04-05 
01:20:27.249912 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 412 MiB 113 GiB 5.90 2026-04-05 01:20:27.249924 | orchestrator | MIN/MAX VAR: 0.89/1.11 STDDEV: 0.39 2026-04-05 01:20:27.293180 | orchestrator | 2026-04-05 01:20:27.293274 | orchestrator | # Ceph monitor status 2026-04-05 01:20:27.293289 | orchestrator | 2026-04-05 01:20:27.293301 | orchestrator | + echo 2026-04-05 01:20:27.293313 | orchestrator | + echo '# Ceph monitor status' 2026-04-05 01:20:27.293325 | orchestrator | + echo 2026-04-05 01:20:27.293337 | orchestrator | + ceph mon stat 2026-04-05 01:20:27.882560 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 10, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-05 01:20:27.936234 | orchestrator | 2026-04-05 01:20:27.936334 | orchestrator | # Ceph quorum status 2026-04-05 01:20:27.936357 | orchestrator | 2026-04-05 01:20:27.936369 | orchestrator | + echo 2026-04-05 01:20:27.936381 | orchestrator | + echo '# Ceph quorum status' 2026-04-05 01:20:27.936393 | orchestrator | + echo 2026-04-05 01:20:27.936759 | orchestrator | + ceph quorum_status 2026-04-05 01:20:27.936880 | orchestrator | + jq 2026-04-05 01:20:28.584408 | orchestrator | { 2026-04-05 01:20:28.585315 | orchestrator | "election_epoch": 10, 2026-04-05 01:20:28.585354 | orchestrator | "quorum": [ 2026-04-05 01:20:28.585363 | orchestrator | 0, 2026-04-05 01:20:28.585371 | orchestrator | 1, 2026-04-05 01:20:28.585379 | orchestrator | 2 2026-04-05 01:20:28.585387 | orchestrator | ], 2026-04-05 01:20:28.585395 | orchestrator | "quorum_names": [ 2026-04-05 01:20:28.585403 | orchestrator | "testbed-node-0", 2026-04-05 01:20:28.585411 | orchestrator | "testbed-node-1", 2026-04-05 01:20:28.585418 | orchestrator | 
"testbed-node-2" 2026-04-05 01:20:28.585426 | orchestrator | ], 2026-04-05 01:20:28.585437 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-05 01:20:28.585445 | orchestrator | "quorum_age": 1691, 2026-04-05 01:20:28.585453 | orchestrator | "features": { 2026-04-05 01:20:28.585461 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-05 01:20:28.585472 | orchestrator | "quorum_mon": [ 2026-04-05 01:20:28.585479 | orchestrator | "kraken", 2026-04-05 01:20:28.585486 | orchestrator | "luminous", 2026-04-05 01:20:28.585493 | orchestrator | "mimic", 2026-04-05 01:20:28.585503 | orchestrator | "osdmap-prune", 2026-04-05 01:20:28.585510 | orchestrator | "nautilus", 2026-04-05 01:20:28.585518 | orchestrator | "octopus", 2026-04-05 01:20:28.585526 | orchestrator | "pacific", 2026-04-05 01:20:28.585534 | orchestrator | "elector-pinging", 2026-04-05 01:20:28.585542 | orchestrator | "quincy", 2026-04-05 01:20:28.585549 | orchestrator | "reef" 2026-04-05 01:20:28.585556 | orchestrator | ] 2026-04-05 01:20:28.585564 | orchestrator | }, 2026-04-05 01:20:28.585571 | orchestrator | "monmap": { 2026-04-05 01:20:28.585579 | orchestrator | "epoch": 1, 2026-04-05 01:20:28.585587 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-05 01:20:28.585596 | orchestrator | "modified": "2026-04-05T00:51:47.819405Z", 2026-04-05 01:20:28.585605 | orchestrator | "created": "2026-04-05T00:51:47.819405Z", 2026-04-05 01:20:28.585613 | orchestrator | "min_mon_release": 18, 2026-04-05 01:20:28.585622 | orchestrator | "min_mon_release_name": "reef", 2026-04-05 01:20:28.585629 | orchestrator | "election_strategy": 1, 2026-04-05 01:20:28.585637 | orchestrator | "disallowed_leaders": "", 2026-04-05 01:20:28.585645 | orchestrator | "stretch_mode": false, 2026-04-05 01:20:28.585655 | orchestrator | "tiebreaker_mon": "", 2026-04-05 01:20:28.585663 | orchestrator | "removed_ranks": "", 2026-04-05 01:20:28.585670 | orchestrator | "features": { 2026-04-05 
01:20:28.585678 | orchestrator | "persistent": [ 2026-04-05 01:20:28.585707 | orchestrator | "kraken", 2026-04-05 01:20:28.585717 | orchestrator | "luminous", 2026-04-05 01:20:28.585724 | orchestrator | "mimic", 2026-04-05 01:20:28.585732 | orchestrator | "osdmap-prune", 2026-04-05 01:20:28.585738 | orchestrator | "nautilus", 2026-04-05 01:20:28.585744 | orchestrator | "octopus", 2026-04-05 01:20:28.585750 | orchestrator | "pacific", 2026-04-05 01:20:28.585756 | orchestrator | "elector-pinging", 2026-04-05 01:20:28.585763 | orchestrator | "quincy", 2026-04-05 01:20:28.585770 | orchestrator | "reef" 2026-04-05 01:20:28.585776 | orchestrator | ], 2026-04-05 01:20:28.585783 | orchestrator | "optional": [] 2026-04-05 01:20:28.585790 | orchestrator | }, 2026-04-05 01:20:28.585797 | orchestrator | "mons": [ 2026-04-05 01:20:28.585804 | orchestrator | { 2026-04-05 01:20:28.585829 | orchestrator | "rank": 0, 2026-04-05 01:20:28.585850 | orchestrator | "name": "testbed-node-0", 2026-04-05 01:20:28.585858 | orchestrator | "public_addrs": { 2026-04-05 01:20:28.585864 | orchestrator | "addrvec": [ 2026-04-05 01:20:28.585871 | orchestrator | { 2026-04-05 01:20:28.585877 | orchestrator | "type": "v2", 2026-04-05 01:20:28.585884 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-05 01:20:28.585890 | orchestrator | "nonce": 0 2026-04-05 01:20:28.585897 | orchestrator | }, 2026-04-05 01:20:28.585903 | orchestrator | { 2026-04-05 01:20:28.585910 | orchestrator | "type": "v1", 2026-04-05 01:20:28.585916 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-05 01:20:28.585923 | orchestrator | "nonce": 0 2026-04-05 01:20:28.585929 | orchestrator | } 2026-04-05 01:20:28.585936 | orchestrator | ] 2026-04-05 01:20:28.585943 | orchestrator | }, 2026-04-05 01:20:28.585950 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-05 01:20:28.585957 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-05 01:20:28.585963 | orchestrator | "priority": 0, 2026-04-05 01:20:28.585970 
| orchestrator | "weight": 0, 2026-04-05 01:20:28.585977 | orchestrator | "crush_location": "{}" 2026-04-05 01:20:28.585983 | orchestrator | }, 2026-04-05 01:20:28.585990 | orchestrator | { 2026-04-05 01:20:28.585997 | orchestrator | "rank": 1, 2026-04-05 01:20:28.586003 | orchestrator | "name": "testbed-node-1", 2026-04-05 01:20:28.586010 | orchestrator | "public_addrs": { 2026-04-05 01:20:28.586058 | orchestrator | "addrvec": [ 2026-04-05 01:20:28.586066 | orchestrator | { 2026-04-05 01:20:28.586073 | orchestrator | "type": "v2", 2026-04-05 01:20:28.586080 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-05 01:20:28.586087 | orchestrator | "nonce": 0 2026-04-05 01:20:28.586093 | orchestrator | }, 2026-04-05 01:20:28.586100 | orchestrator | { 2026-04-05 01:20:28.586107 | orchestrator | "type": "v1", 2026-04-05 01:20:28.586114 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-05 01:20:28.586120 | orchestrator | "nonce": 0 2026-04-05 01:20:28.586127 | orchestrator | } 2026-04-05 01:20:28.586134 | orchestrator | ] 2026-04-05 01:20:28.586140 | orchestrator | }, 2026-04-05 01:20:28.586147 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-05 01:20:28.586153 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-05 01:20:28.586160 | orchestrator | "priority": 0, 2026-04-05 01:20:28.586167 | orchestrator | "weight": 0, 2026-04-05 01:20:28.586174 | orchestrator | "crush_location": "{}" 2026-04-05 01:20:28.586181 | orchestrator | }, 2026-04-05 01:20:28.586188 | orchestrator | { 2026-04-05 01:20:28.586194 | orchestrator | "rank": 2, 2026-04-05 01:20:28.586201 | orchestrator | "name": "testbed-node-2", 2026-04-05 01:20:28.586208 | orchestrator | "public_addrs": { 2026-04-05 01:20:28.586215 | orchestrator | "addrvec": [ 2026-04-05 01:20:28.586222 | orchestrator | { 2026-04-05 01:20:28.586228 | orchestrator | "type": "v2", 2026-04-05 01:20:28.586235 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-05 01:20:28.586242 | orchestrator | "nonce": 0 
2026-04-05 01:20:28.586248 | orchestrator | }, 2026-04-05 01:20:28.586255 | orchestrator | { 2026-04-05 01:20:28.586261 | orchestrator | "type": "v1", 2026-04-05 01:20:28.586268 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-05 01:20:28.586274 | orchestrator | "nonce": 0 2026-04-05 01:20:28.586281 | orchestrator | } 2026-04-05 01:20:28.586288 | orchestrator | ] 2026-04-05 01:20:28.586295 | orchestrator | }, 2026-04-05 01:20:28.586302 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-05 01:20:28.586309 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-05 01:20:28.586325 | orchestrator | "priority": 0, 2026-04-05 01:20:28.586333 | orchestrator | "weight": 0, 2026-04-05 01:20:28.586339 | orchestrator | "crush_location": "{}" 2026-04-05 01:20:28.586346 | orchestrator | } 2026-04-05 01:20:28.586353 | orchestrator | ] 2026-04-05 01:20:28.586360 | orchestrator | } 2026-04-05 01:20:28.586367 | orchestrator | } 2026-04-05 01:20:28.586389 | orchestrator | 2026-04-05 01:20:28.586397 | orchestrator | # Ceph free space status 2026-04-05 01:20:28.586404 | orchestrator | 2026-04-05 01:20:28.586410 | orchestrator | + echo 2026-04-05 01:20:28.586417 | orchestrator | + echo '# Ceph free space status' 2026-04-05 01:20:28.586424 | orchestrator | + echo 2026-04-05 01:20:28.586431 | orchestrator | + ceph df 2026-04-05 01:20:29.196484 | orchestrator | --- RAW STORAGE --- 2026-04-05 01:20:29.196591 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-05 01:20:29.196619 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-04-05 01:20:29.196632 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-04-05 01:20:29.196643 | orchestrator | 2026-04-05 01:20:29.196654 | orchestrator | --- POOLS --- 2026-04-05 01:20:29.196666 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-05 01:20:29.196678 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-05 01:20:29.196690 | orchestrator | cephfs_data 2 32 0 B 0 0 
B 0 35 GiB 2026-04-05 01:20:29.196700 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-05 01:20:29.196711 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-05 01:20:29.196722 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-05 01:20:29.196733 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-05 01:20:29.196744 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-05 01:20:29.196755 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-05 01:20:29.196765 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-05 01:20:29.196776 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:20:29.196787 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:20:29.196867 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.91 35 GiB 2026-04-05 01:20:29.196880 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:20:29.196892 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 01:20:29.240798 | orchestrator | ++ semver latest 5.0.0 2026-04-05 01:20:29.290790 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-05 01:20:29.290928 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-05 01:20:29.290945 | orchestrator | + osism apply facts 2026-04-05 01:20:30.704553 | orchestrator | 2026-04-05 01:20:30 | INFO  | Prepare task for execution of facts. 2026-04-05 01:20:30.790585 | orchestrator | 2026-04-05 01:20:30 | INFO  | Task ea0ec4ff-4b3c-419a-9b9b-c26146cb33e7 (facts) was prepared for execution. 2026-04-05 01:20:30.790681 | orchestrator | 2026-04-05 01:20:30 | INFO  | It takes a moment until task ea0ec4ff-4b3c-419a-9b9b-c26146cb33e7 (facts) has been started and output is visible here. 
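The Ceph checks traced above (`ceph -s`, `ceph versions`, `ceph osd df tree`, `ceph mon stat`, `ceph quorum_status`, `ceph df`) amount to a health gate before the deployment continues. A minimal sketch of such a gate is below; `check_ceph_health` is a hypothetical helper, not part of the testbed scripts, and the here-doc reuses a sample of the `ceph -s` output captured above so the sketch runs without a live cluster (on a real node you would pipe `ceph -s` in instead).

```shell
#!/bin/sh
# Hypothetical helper (not from the testbed repo): extract the value of the
# "health:" field from `ceph -s` output read on stdin.
check_ceph_health() {
    awk '/^ *health:/ { print $2 }'
}

# Sample taken from the status block logged above; replace the here-doc
# with `ceph -s |` on a real deployment.
status="$(check_ceph_health <<'EOF'
  cluster:
    id:     11111111-1111-1111-1111-111111111111
    health: HEALTH_OK
EOF
)"

if [ "$status" = "HEALTH_OK" ]; then
    echo "cluster healthy"
else
    echo "cluster unhealthy: $status" >&2
    exit 1
fi
```

A stricter variant would also fail on `HEALTH_WARN`, mirroring the "strict" tasks in the `osism validate ceph-mons` play further down.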
2026-04-05 01:20:43.649479 | orchestrator | 2026-04-05 01:20:43.649582 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 01:20:43.649599 | orchestrator | 2026-04-05 01:20:43.649611 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 01:20:43.649623 | orchestrator | Sunday 05 April 2026 01:20:34 +0000 (0:00:00.378) 0:00:00.378 ********** 2026-04-05 01:20:43.649637 | orchestrator | ok: [testbed-manager] 2026-04-05 01:20:43.649658 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:20:43.649677 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:20:43.649695 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:20:43.649713 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:20:43.649730 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:20:43.649783 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:20:43.649802 | orchestrator | 2026-04-05 01:20:43.649864 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 01:20:43.649885 | orchestrator | Sunday 05 April 2026 01:20:35 +0000 (0:00:01.370) 0:00:01.748 ********** 2026-04-05 01:20:43.649905 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:20:43.649924 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:20:43.649942 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:20:43.649959 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:20:43.649970 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:20:43.649981 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:20:43.649991 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:20:43.650002 | orchestrator | 2026-04-05 01:20:43.650080 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 01:20:43.650096 | orchestrator | 2026-04-05 01:20:43.650109 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-05 01:20:43.650122 | orchestrator | Sunday 05 April 2026 01:20:37 +0000 (0:00:01.392) 0:00:03.141 ********** 2026-04-05 01:20:43.650135 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:20:43.650148 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:20:43.650179 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:20:43.650192 | orchestrator | ok: [testbed-manager] 2026-04-05 01:20:43.650204 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:20:43.650217 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:20:43.650229 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:20:43.650242 | orchestrator | 2026-04-05 01:20:43.650255 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 01:20:43.650268 | orchestrator | 2026-04-05 01:20:43.650281 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 01:20:43.650294 | orchestrator | Sunday 05 April 2026 01:20:42 +0000 (0:00:05.372) 0:00:08.513 ********** 2026-04-05 01:20:43.650308 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:20:43.650321 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:20:43.650333 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:20:43.650345 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:20:43.650358 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:20:43.650370 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:20:43.650381 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:20:43.650392 | orchestrator | 2026-04-05 01:20:43.650403 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:20:43.650415 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:20:43.650436 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 01:20:43.650456 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:20:43.650475 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:20:43.650493 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:20:43.650521 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:20:43.650542 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 01:20:43.650559 | orchestrator | 2026-04-05 01:20:43.650578 | orchestrator | 2026-04-05 01:20:43.650595 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:20:43.650614 | orchestrator | Sunday 05 April 2026 01:20:43 +0000 (0:00:00.764) 0:00:09.278 ********** 2026-04-05 01:20:43.650647 | orchestrator | =============================================================================== 2026-04-05 01:20:43.650665 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.37s 2026-04-05 01:20:43.650685 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2026-04-05 01:20:43.650705 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.37s 2026-04-05 01:20:43.650724 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s 2026-04-05 01:20:43.855253 | orchestrator | + osism validate ceph-mons 2026-04-05 01:21:16.108577 | orchestrator | 2026-04-05 01:21:16.108663 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-05 01:21:16.108673 | orchestrator | 2026-04-05 01:21:16.108681 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-04-05 01:21:16.108689 | orchestrator | Sunday 05 April 2026 01:20:59 +0000 (0:00:00.593) 0:00:00.593 ********** 2026-04-05 01:21:16.108697 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:21:16.108704 | orchestrator | 2026-04-05 01:21:16.108711 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-05 01:21:16.108717 | orchestrator | Sunday 05 April 2026 01:21:00 +0000 (0:00:01.030) 0:00:01.623 ********** 2026-04-05 01:21:16.108725 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 01:21:16.108732 | orchestrator | 2026-04-05 01:21:16.108738 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-05 01:21:16.108745 | orchestrator | Sunday 05 April 2026 01:21:01 +0000 (0:00:00.827) 0:00:02.451 ********** 2026-04-05 01:21:16.108752 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:21:16.108759 | orchestrator | 2026-04-05 01:21:16.108766 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-05 01:21:16.108773 | orchestrator | Sunday 05 April 2026 01:21:01 +0000 (0:00:00.129) 0:00:02.580 ********** 2026-04-05 01:21:16.108779 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:21:16.108786 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:21:16.108805 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:21:16.108812 | orchestrator | 2026-04-05 01:21:16.108819 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-05 01:21:16.108826 | orchestrator | Sunday 05 April 2026 01:21:01 +0000 (0:00:00.300) 0:00:02.880 ********** 2026-04-05 01:21:16.108832 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:21:16.108891 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:21:16.108904 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:21:16.108915 | 
orchestrator | 2026-04-05 01:21:16.108927 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-05 01:21:16.108934 | orchestrator | Sunday 05 April 2026 01:21:03 +0000 (0:00:01.588) 0:00:04.469 ********** 2026-04-05 01:21:16.108941 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:21:16.108948 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:21:16.108955 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:21:16.108962 | orchestrator | 2026-04-05 01:21:16.108969 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-05 01:21:16.108975 | orchestrator | Sunday 05 April 2026 01:21:03 +0000 (0:00:00.292) 0:00:04.762 ********** 2026-04-05 01:21:16.108982 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:21:16.108989 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:21:16.108996 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:21:16.109002 | orchestrator | 2026-04-05 01:21:16.109009 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:21:16.109016 | orchestrator | Sunday 05 April 2026 01:21:03 +0000 (0:00:00.358) 0:00:05.121 ********** 2026-04-05 01:21:16.109023 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:21:16.109030 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:21:16.109036 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:21:16.109043 | orchestrator | 2026-04-05 01:21:16.109050 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-05 01:21:16.109075 | orchestrator | Sunday 05 April 2026 01:21:04 +0000 (0:00:00.309) 0:00:05.430 ********** 2026-04-05 01:21:16.109083 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:21:16.109089 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:21:16.109096 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:21:16.109103 | orchestrator | 2026-04-05 
01:21:16.109109 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-05 01:21:16.109116 | orchestrator | Sunday 05 April 2026 01:21:04 +0000 (0:00:00.483) 0:00:05.914 ********** 2026-04-05 01:21:16.109122 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:21:16.109130 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:21:16.109138 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:21:16.109145 | orchestrator | 2026-04-05 01:21:16.109154 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:21:16.109161 | orchestrator | Sunday 05 April 2026 01:21:04 +0000 (0:00:00.313) 0:00:06.227 ********** 2026-04-05 01:21:16.109169 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:21:16.109176 | orchestrator | 2026-04-05 01:21:16.109184 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:21:16.109192 | orchestrator | Sunday 05 April 2026 01:21:05 +0000 (0:00:00.258) 0:00:06.486 ********** 2026-04-05 01:21:16.109200 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:21:16.109207 | orchestrator | 2026-04-05 01:21:16.109216 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 01:21:16.109224 | orchestrator | Sunday 05 April 2026 01:21:05 +0000 (0:00:00.265) 0:00:06.751 ********** 2026-04-05 01:21:16.109232 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:21:16.109240 | orchestrator | 2026-04-05 01:21:16.109246 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:21:16.109253 | orchestrator | Sunday 05 April 2026 01:21:05 +0000 (0:00:00.285) 0:00:07.036 ********** 2026-04-05 01:21:16.109260 | orchestrator | 2026-04-05 01:21:16.109266 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:21:16.109273 | orchestrator | 
Sunday 05 April 2026 01:21:05 +0000 (0:00:00.086) 0:00:07.123 ********** 2026-04-05 01:21:16.109279 | orchestrator | 2026-04-05 01:21:16.109286 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:21:16.109293 | orchestrator | Sunday 05 April 2026 01:21:05 +0000 (0:00:00.071) 0:00:07.194 ********** 2026-04-05 01:21:16.109299 | orchestrator | 2026-04-05 01:21:16.109306 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:21:16.109313 | orchestrator | Sunday 05 April 2026 01:21:06 +0000 (0:00:00.287) 0:00:07.482 ********** 2026-04-05 01:21:16.109319 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:21:16.109326 | orchestrator | 2026-04-05 01:21:16.109332 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-05 01:21:16.109339 | orchestrator | Sunday 05 April 2026 01:21:06 +0000 (0:00:00.280) 0:00:07.762 ********** 2026-04-05 01:21:16.109346 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:21:16.109352 | orchestrator | 2026-04-05 01:21:16.109371 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-05 01:21:16.109378 | orchestrator | Sunday 05 April 2026 01:21:06 +0000 (0:00:00.258) 0:00:08.021 ********** 2026-04-05 01:21:16.109385 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:21:16.109391 | orchestrator | 2026-04-05 01:21:16.109398 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-05 01:21:16.109405 | orchestrator | Sunday 05 April 2026 01:21:06 +0000 (0:00:00.126) 0:00:08.147 ********** 2026-04-05 01:21:16.109411 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:21:16.109418 | orchestrator | 2026-04-05 01:21:16.109424 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-05 01:21:16.109431 | orchestrator | Sunday 
05 April 2026 01:21:08 +0000 (0:00:01.708) 0:00:09.856 **********
2026-04-05 01:21:16.109438 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:16.109444 | orchestrator |
2026-04-05 01:21:16.109451 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-05 01:21:16.109462 | orchestrator | Sunday 05 April 2026 01:21:08 +0000 (0:00:00.321) 0:00:10.178 **********
2026-04-05 01:21:16.109469 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:16.109476 | orchestrator |
2026-04-05 01:21:16.109482 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-05 01:21:16.109489 | orchestrator | Sunday 05 April 2026 01:21:08 +0000 (0:00:00.117) 0:00:10.295 **********
2026-04-05 01:21:16.109496 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:16.109503 | orchestrator |
2026-04-05 01:21:16.109510 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-05 01:21:16.109516 | orchestrator | Sunday 05 April 2026 01:21:09 +0000 (0:00:00.325) 0:00:10.620 **********
2026-04-05 01:21:16.109523 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:16.109530 | orchestrator |
2026-04-05 01:21:16.109537 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-05 01:21:16.109543 | orchestrator | Sunday 05 April 2026 01:21:09 +0000 (0:00:00.305) 0:00:10.926 **********
2026-04-05 01:21:16.109550 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:16.109557 | orchestrator |
2026-04-05 01:21:16.109563 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-05 01:21:16.109570 | orchestrator | Sunday 05 April 2026 01:21:09 +0000 (0:00:00.131) 0:00:11.057 **********
2026-04-05 01:21:16.109577 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:16.109583 | orchestrator |
2026-04-05 01:21:16.109590 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-05 01:21:16.109597 | orchestrator | Sunday 05 April 2026 01:21:09 +0000 (0:00:00.137) 0:00:11.195 **********
2026-04-05 01:21:16.109603 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:16.109610 | orchestrator |
2026-04-05 01:21:16.109617 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-05 01:21:16.109623 | orchestrator | Sunday 05 April 2026 01:21:10 +0000 (0:00:00.359) 0:00:11.554 **********
2026-04-05 01:21:16.109630 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:21:16.109637 | orchestrator |
2026-04-05 01:21:16.109643 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-05 01:21:16.109650 | orchestrator | Sunday 05 April 2026 01:21:11 +0000 (0:00:01.545) 0:00:13.100 **********
2026-04-05 01:21:16.109657 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:16.109663 | orchestrator |
2026-04-05 01:21:16.109670 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-05 01:21:16.109677 | orchestrator | Sunday 05 April 2026 01:21:12 +0000 (0:00:00.307) 0:00:13.408 **********
2026-04-05 01:21:16.109683 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:16.109690 | orchestrator |
2026-04-05 01:21:16.109697 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-05 01:21:16.109703 | orchestrator | Sunday 05 April 2026 01:21:12 +0000 (0:00:00.153) 0:00:13.562 **********
2026-04-05 01:21:16.109710 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:16.109716 | orchestrator |
2026-04-05 01:21:16.109723 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-05 01:21:16.109730 | orchestrator | Sunday 05 April 2026 01:21:12 +0000 (0:00:00.161) 0:00:13.723 **********
2026-04-05 01:21:16.109736 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:16.109743 | orchestrator |
2026-04-05 01:21:16.109750 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-05 01:21:16.109756 | orchestrator | Sunday 05 April 2026 01:21:12 +0000 (0:00:00.141) 0:00:13.864 **********
2026-04-05 01:21:16.109763 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:16.109769 | orchestrator |
2026-04-05 01:21:16.109784 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-05 01:21:16.109791 | orchestrator | Sunday 05 April 2026 01:21:12 +0000 (0:00:00.134) 0:00:13.998 **********
2026-04-05 01:21:16.109797 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:16.109804 | orchestrator |
2026-04-05 01:21:16.109811 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-05 01:21:16.109823 | orchestrator | Sunday 05 April 2026 01:21:12 +0000 (0:00:00.389) 0:00:14.389 **********
2026-04-05 01:21:16.109829 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:16.109857 | orchestrator |
2026-04-05 01:21:16.109874 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 01:21:16.109883 | orchestrator | Sunday 05 April 2026 01:21:13 +0000 (0:00:00.257) 0:00:14.647 **********
2026-04-05 01:21:16.109890 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:16.109896 | orchestrator |
2026-04-05 01:21:16.109903 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 01:21:16.109910 | orchestrator | Sunday 05 April 2026 01:21:15 +0000 (0:00:01.854) 0:00:16.501 **********
2026-04-05 01:21:16.109916 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:16.109923 | orchestrator |
2026-04-05 01:21:16.109930 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 01:21:16.109937 | orchestrator | Sunday 05 April 2026 01:21:15 +0000 (0:00:00.292) 0:00:16.793 **********
2026-04-05 01:21:16.109943 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:16.109950 | orchestrator |
2026-04-05 01:21:16.109962 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:18.444030 | orchestrator | Sunday 05 April 2026 01:21:16 +0000 (0:00:00.696) 0:00:17.490 **********
2026-04-05 01:21:18.444129 | orchestrator |
2026-04-05 01:21:18.444145 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:18.444157 | orchestrator | Sunday 05 April 2026 01:21:16 +0000 (0:00:00.073) 0:00:17.563 **********
2026-04-05 01:21:18.444168 | orchestrator |
2026-04-05 01:21:18.444179 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:18.444190 | orchestrator | Sunday 05 April 2026 01:21:16 +0000 (0:00:00.089) 0:00:17.653 **********
2026-04-05 01:21:18.444201 | orchestrator |
2026-04-05 01:21:18.444212 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-05 01:21:18.444223 | orchestrator | Sunday 05 April 2026 01:21:16 +0000 (0:00:00.083) 0:00:17.737 **********
2026-04-05 01:21:18.444234 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:18.444245 | orchestrator |
2026-04-05 01:21:18.444256 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 01:21:18.444266 | orchestrator | Sunday 05 April 2026 01:21:17 +0000 (0:00:01.344) 0:00:19.082 **********
2026-04-05 01:21:18.444277 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-05 01:21:18.444288 | orchestrator |  "msg": [
2026-04-05 01:21:18.444319 | orchestrator |  "Validator run completed.",
2026-04-05 01:21:18.444331 | orchestrator |  "You can find the report file here:",
2026-04-05 01:21:18.444342 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-05T01:21:00+00:00-report.json",
2026-04-05 01:21:18.444353 | orchestrator |  "on the following host:",
2026-04-05 01:21:18.444365 | orchestrator |  "testbed-manager"
2026-04-05 01:21:18.444375 | orchestrator |  ]
2026-04-05 01:21:18.444387 | orchestrator | }
2026-04-05 01:21:18.444398 | orchestrator |
2026-04-05 01:21:18.444409 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:21:18.444421 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-05 01:21:18.444433 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 01:21:18.444444 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 01:21:18.444455 | orchestrator |
2026-04-05 01:21:18.444466 | orchestrator |
2026-04-05 01:21:18.444477 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:21:18.444511 | orchestrator | Sunday 05 April 2026 01:21:18 +0000 (0:00:00.427) 0:00:19.509 **********
2026-04-05 01:21:18.444523 | orchestrator | ===============================================================================
2026-04-05 01:21:18.444533 | orchestrator | Aggregate test results step one ----------------------------------------- 1.85s
2026-04-05 01:21:18.444544 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.71s
2026-04-05 01:21:18.444555 | orchestrator | Get container info ------------------------------------------------------ 1.59s
2026-04-05 01:21:18.444566 | orchestrator | Gather status data ------------------------------------------------------ 1.55s
2026-04-05 01:21:18.444579 | orchestrator | Write report file ------------------------------------------------------- 1.34s
2026-04-05 01:21:18.444592 | orchestrator | Get timestamp for report file ------------------------------------------- 1.03s
2026-04-05 01:21:18.444604 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2026-04-05 01:21:18.444617 | orchestrator | Aggregate test results step three --------------------------------------- 0.70s
2026-04-05 01:21:18.444629 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.48s
2026-04-05 01:21:18.444643 | orchestrator | Flush handlers ---------------------------------------------------------- 0.45s
2026-04-05 01:21:18.444655 | orchestrator | Print report file information ------------------------------------------- 0.43s
2026-04-05 01:21:18.444668 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.39s
2026-04-05 01:21:18.444680 | orchestrator | Prepare status test vars ------------------------------------------------ 0.36s
2026-04-05 01:21:18.444692 | orchestrator | Set test result to passed if container is existing ---------------------- 0.36s
2026-04-05 01:21:18.444705 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2026-04-05 01:21:18.444718 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s
2026-04-05 01:21:18.444731 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s
2026-04-05 01:21:18.444743 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2026-04-05 01:21:18.444755 | orchestrator | Set health test data ---------------------------------------------------- 0.31s
2026-04-05 01:21:18.444767 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2026-04-05 01:21:18.675086 | orchestrator | + osism validate ceph-mgrs
2026-04-05 01:21:49.293572 | orchestrator |
2026-04-05 01:21:49.293701 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-05 01:21:49.293731 | orchestrator |
2026-04-05 01:21:49.293751 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-05 01:21:49.293769 | orchestrator | Sunday 05 April 2026 01:21:33 +0000 (0:00:00.552) 0:00:00.552 **********
2026-04-05 01:21:49.293787 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:49.293804 | orchestrator |
2026-04-05 01:21:49.293820 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-05 01:21:49.293837 | orchestrator | Sunday 05 April 2026 01:21:35 +0000 (0:00:01.063) 0:00:01.616 **********
2026-04-05 01:21:49.293941 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:49.293962 | orchestrator |
2026-04-05 01:21:49.293981 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-05 01:21:49.294002 | orchestrator | Sunday 05 April 2026 01:21:35 +0000 (0:00:00.705) 0:00:02.322 **********
2026-04-05 01:21:49.294102 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.294142 | orchestrator |
2026-04-05 01:21:49.294181 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-05 01:21:49.294218 | orchestrator | Sunday 05 April 2026 01:21:35 +0000 (0:00:00.123) 0:00:02.445 **********
2026-04-05 01:21:49.294241 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.294260 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:21:49.294279 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:21:49.294330 | orchestrator |
2026-04-05 01:21:49.294350 | orchestrator | TASK [Get container info] ******************************************************
2026-04-05 01:21:49.294370 | orchestrator | Sunday 05 April 2026 01:21:36 +0000 (0:00:00.314) 0:00:02.759 **********
2026-04-05 01:21:49.294391 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:21:49.294411 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.294430 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:21:49.294451 | orchestrator |
2026-04-05 01:21:49.294472 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-05 01:21:49.294512 | orchestrator | Sunday 05 April 2026 01:21:37 +0000 (0:00:01.602) 0:00:04.361 **********
2026-04-05 01:21:49.294536 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.294556 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:21:49.294576 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:21:49.294594 | orchestrator |
2026-04-05 01:21:49.294612 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-05 01:21:49.294631 | orchestrator | Sunday 05 April 2026 01:21:38 +0000 (0:00:00.317) 0:00:04.679 **********
2026-04-05 01:21:49.294652 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.294675 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:21:49.294697 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:21:49.294719 | orchestrator |
2026-04-05 01:21:49.294736 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 01:21:49.294754 | orchestrator | Sunday 05 April 2026 01:21:38 +0000 (0:00:00.320) 0:00:05.000 **********
2026-04-05 01:21:49.294772 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.294791 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:21:49.294812 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:21:49.294831 | orchestrator |
2026-04-05 01:21:49.294881 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-05 01:21:49.294903 | orchestrator | Sunday 05 April 2026 01:21:38 +0000 (0:00:00.366) 0:00:05.366 **********
2026-04-05 01:21:49.294923 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.294943 | orchestrator | skipping: [testbed-node-1]
2026-04-05 01:21:49.294967 | orchestrator | skipping: [testbed-node-2]
2026-04-05 01:21:49.294995 | orchestrator |
2026-04-05 01:21:49.295023 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-05 01:21:49.295051 | orchestrator | Sunday 05 April 2026 01:21:39 +0000 (0:00:00.547) 0:00:05.914 **********
2026-04-05 01:21:49.295077 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.295103 | orchestrator | ok: [testbed-node-1]
2026-04-05 01:21:49.295127 | orchestrator | ok: [testbed-node-2]
2026-04-05 01:21:49.295145 | orchestrator |
2026-04-05 01:21:49.295164 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 01:21:49.295182 | orchestrator | Sunday 05 April 2026 01:21:39 +0000 (0:00:00.363) 0:00:06.278 **********
2026-04-05 01:21:49.295199 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.295216 | orchestrator |
2026-04-05 01:21:49.295234 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 01:21:49.295252 | orchestrator | Sunday 05 April 2026 01:21:39 +0000 (0:00:00.253) 0:00:06.531 **********
2026-04-05 01:21:49.295270 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.295290 | orchestrator |
2026-04-05 01:21:49.295307 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 01:21:49.295324 | orchestrator | Sunday 05 April 2026 01:21:40 +0000 (0:00:00.255) 0:00:06.786 **********
2026-04-05 01:21:49.295342 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.295354 | orchestrator |
2026-04-05 01:21:49.295365 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:49.295461 | orchestrator | Sunday 05 April 2026 01:21:40 +0000 (0:00:00.290) 0:00:07.077 **********
2026-04-05 01:21:49.295473 | orchestrator |
2026-04-05 01:21:49.295484 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:49.295495 | orchestrator | Sunday 05 April 2026 01:21:40 +0000 (0:00:00.078) 0:00:07.156 **********
2026-04-05 01:21:49.295525 | orchestrator |
2026-04-05 01:21:49.295536 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:49.295547 | orchestrator | Sunday 05 April 2026 01:21:40 +0000 (0:00:00.077) 0:00:07.234 **********
2026-04-05 01:21:49.295558 | orchestrator |
2026-04-05 01:21:49.295576 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 01:21:49.295595 | orchestrator | Sunday 05 April 2026 01:21:40 +0000 (0:00:00.318) 0:00:07.552 **********
2026-04-05 01:21:49.295613 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.295632 | orchestrator |
2026-04-05 01:21:49.295650 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-05 01:21:49.295670 | orchestrator | Sunday 05 April 2026 01:21:41 +0000 (0:00:00.266) 0:00:07.819 **********
2026-04-05 01:21:49.295690 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.295709 | orchestrator |
2026-04-05 01:21:49.295754 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-05 01:21:49.295766 | orchestrator | Sunday 05 April 2026 01:21:41 +0000 (0:00:00.132) 0:00:08.090 **********
2026-04-05 01:21:49.295776 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.295786 | orchestrator |
2026-04-05 01:21:49.295796 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-05 01:21:49.295805 | orchestrator | Sunday 05 April 2026 01:21:41 +0000 (0:00:00.132) 0:00:08.223 **********
2026-04-05 01:21:49.295815 | orchestrator | changed: [testbed-node-0]
2026-04-05 01:21:49.295824 | orchestrator |
2026-04-05 01:21:49.295834 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-05 01:21:49.295844 | orchestrator | Sunday 05 April 2026 01:21:43 +0000 (0:00:01.700) 0:00:09.924 **********
2026-04-05 01:21:49.295888 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.295899 | orchestrator |
2026-04-05 01:21:49.295909 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-05 01:21:49.295919 | orchestrator | Sunday 05 April 2026 01:21:43 +0000 (0:00:00.272) 0:00:10.196 **********
2026-04-05 01:21:49.295929 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.295938 | orchestrator |
2026-04-05 01:21:49.295948 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-05 01:21:49.295958 | orchestrator | Sunday 05 April 2026 01:21:43 +0000 (0:00:00.316) 0:00:10.512 **********
2026-04-05 01:21:49.295967 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.295977 | orchestrator |
2026-04-05 01:21:49.295987 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-05 01:21:49.295996 | orchestrator | Sunday 05 April 2026 01:21:44 +0000 (0:00:00.147) 0:00:10.660 **********
2026-04-05 01:21:49.296006 | orchestrator | ok: [testbed-node-0]
2026-04-05 01:21:49.296016 | orchestrator |
2026-04-05 01:21:49.296025 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-05 01:21:49.296035 | orchestrator | Sunday 05 April 2026 01:21:44 +0000 (0:00:00.155) 0:00:10.815 **********
2026-04-05 01:21:49.296045 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:49.296055 | orchestrator |
2026-04-05 01:21:49.296065 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-05 01:21:49.296074 | orchestrator | Sunday 05 April 2026 01:21:44 +0000 (0:00:00.294) 0:00:11.110 **********
2026-04-05 01:21:49.296084 | orchestrator | skipping: [testbed-node-0]
2026-04-05 01:21:49.296094 | orchestrator |
2026-04-05 01:21:49.296103 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 01:21:49.296113 | orchestrator | Sunday 05 April 2026 01:21:44 +0000 (0:00:00.263) 0:00:11.374 **********
2026-04-05 01:21:49.296136 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:49.296146 | orchestrator |
2026-04-05 01:21:49.296156 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 01:21:49.296165 | orchestrator | Sunday 05 April 2026 01:21:46 +0000 (0:00:01.796) 0:00:13.170 **********
2026-04-05 01:21:49.296175 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:49.296193 | orchestrator |
2026-04-05 01:21:49.296203 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 01:21:49.296213 | orchestrator | Sunday 05 April 2026 01:21:46 +0000 (0:00:00.286) 0:00:13.456 **********
2026-04-05 01:21:49.296223 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:49.296232 | orchestrator |
2026-04-05 01:21:49.296242 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:49.296251 | orchestrator | Sunday 05 April 2026 01:21:47 +0000 (0:00:00.278) 0:00:13.735 **********
2026-04-05 01:21:49.296261 | orchestrator |
2026-04-05 01:21:49.296270 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:49.296280 | orchestrator | Sunday 05 April 2026 01:21:47 +0000 (0:00:00.089) 0:00:13.824 **********
2026-04-05 01:21:49.296289 | orchestrator |
2026-04-05 01:21:49.296299 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 01:21:49.296309 | orchestrator | Sunday 05 April 2026 01:21:47 +0000 (0:00:00.081) 0:00:13.905 **********
2026-04-05 01:21:49.296318 | orchestrator |
2026-04-05 01:21:49.296328 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-05 01:21:49.296337 | orchestrator | Sunday 05 April 2026 01:21:47 +0000 (0:00:00.074) 0:00:13.980 **********
2026-04-05 01:21:49.296347 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 01:21:49.296356 | orchestrator |
2026-04-05 01:21:49.296366 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 01:21:49.296375 | orchestrator | Sunday 05 April 2026 01:21:48 +0000 (0:00:01.464) 0:00:15.445 **********
2026-04-05 01:21:49.296385 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-05 01:21:49.296395 | orchestrator |  "msg": [
2026-04-05 01:21:49.296405 | orchestrator |  "Validator run completed.",
2026-04-05 01:21:49.296415 | orchestrator |  "You can find the report file here:",
2026-04-05 01:21:49.296425 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-05T01:21:34+00:00-report.json",
2026-04-05 01:21:49.296437 | orchestrator |  "on the following host:",
2026-04-05 01:21:49.296447 | orchestrator |  "testbed-manager"
2026-04-05 01:21:49.296456 | orchestrator |  ]
2026-04-05 01:21:49.296466 | orchestrator | }
2026-04-05 01:21:49.296476 | orchestrator |
2026-04-05 01:21:49.296485 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:21:49.296496 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-05 01:21:49.296507 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 01:21:49.296526 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 01:21:49.660141 | orchestrator |
2026-04-05 01:21:49.660221 | orchestrator |
2026-04-05 01:21:49.660231 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:21:49.660240 | orchestrator | Sunday 05 April 2026 01:21:49 +0000 (0:00:00.422) 0:00:15.867 **********
2026-04-05 01:21:49.660248 | orchestrator | ===============================================================================
2026-04-05 01:21:49.660255 | orchestrator | Aggregate test results step one ----------------------------------------- 1.80s
2026-04-05 01:21:49.660262 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.70s
2026-04-05 01:21:49.660268 | orchestrator | Get container info ------------------------------------------------------ 1.60s
2026-04-05 01:21:49.660275 | orchestrator | Write report file ------------------------------------------------------- 1.47s
2026-04-05 01:21:49.660282 | orchestrator | Get timestamp for report file ------------------------------------------- 1.06s
2026-04-05 01:21:49.660288 | orchestrator | Create report output directory ------------------------------------------ 0.71s
2026-04-05 01:21:49.660315 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.55s
2026-04-05 01:21:49.660322 | orchestrator | Flush handlers ---------------------------------------------------------- 0.47s
2026-04-05 01:21:49.660328 | orchestrator | Print report file information ------------------------------------------- 0.42s
2026-04-05 01:21:49.660335 | orchestrator | Prepare test data ------------------------------------------------------- 0.37s
2026-04-05 01:21:49.660342 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.36s
2026-04-05 01:21:49.660348 | orchestrator | Set test result to passed if container is existing ---------------------- 0.32s
2026-04-05 01:21:49.660355 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2026-04-05 01:21:49.660374 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s
2026-04-05 01:21:49.660382 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-04-05 01:21:49.660389 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s
2026-04-05 01:21:49.660396 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s
2026-04-05 01:21:49.660403 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s
2026-04-05 01:21:49.660410 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2026-04-05 01:21:49.660416 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s
2026-04-05 01:21:49.874437 | orchestrator | + osism validate ceph-osds
2026-04-05 01:22:09.351070 | orchestrator |
2026-04-05 01:22:09.351167 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-05 01:22:09.351177 | orchestrator |
2026-04-05 01:22:09.351183 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-05 01:22:09.351190 | orchestrator | Sunday 05 April 2026 01:22:04 +0000 (0:00:00.515) 0:00:00.515 **********
2026-04-05 01:22:09.351196 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 01:22:09.351202 | orchestrator |
2026-04-05 01:22:09.351207 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 01:22:09.351213 | orchestrator | Sunday 05 April 2026 01:22:06 +0000 (0:00:01.114) 0:00:01.630 **********
2026-04-05 01:22:09.351218 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 01:22:09.351256 | orchestrator |
2026-04-05 01:22:09.351263 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-05 01:22:09.351269 | orchestrator | Sunday 05 April 2026 01:22:06 +0000 (0:00:00.236) 0:00:01.866 **********
2026-04-05 01:22:09.351275 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 01:22:09.351280 | orchestrator |
2026-04-05 01:22:09.351285 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-05 01:22:09.351291 | orchestrator | Sunday 05 April 2026 01:22:07 +0000 (0:00:00.797) 0:00:02.664 **********
2026-04-05 01:22:09.351297 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:22:09.351303 | orchestrator |
2026-04-05 01:22:09.351309 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-05 01:22:09.351314 | orchestrator | Sunday 05 April 2026 01:22:07 +0000 (0:00:00.158) 0:00:02.823 **********
2026-04-05 01:22:09.351319 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:22:09.351324 | orchestrator |
2026-04-05 01:22:09.351330 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-05 01:22:09.351335 | orchestrator | Sunday 05 April 2026 01:22:07 +0000 (0:00:00.139) 0:00:02.962 **********
2026-04-05 01:22:09.351340 | orchestrator | skipping: [testbed-node-3]
2026-04-05 01:22:09.351346 | orchestrator | skipping: [testbed-node-4]
2026-04-05 01:22:09.351351 | orchestrator | skipping: [testbed-node-5]
2026-04-05 01:22:09.351356 | orchestrator |
2026-04-05 01:22:09.351361 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-05 01:22:09.351366 | orchestrator | Sunday 05 April 2026 01:22:07 +0000 (0:00:00.483) 0:00:03.446 **********
2026-04-05 01:22:09.351389 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:22:09.351395 | orchestrator |
2026-04-05 01:22:09.351400 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-05 01:22:09.351406 | orchestrator | Sunday 05 April 2026 01:22:08 +0000 (0:00:00.145) 0:00:03.591 **********
2026-04-05 01:22:09.351411 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:22:09.351416 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:22:09.351421 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:22:09.351426 | orchestrator |
2026-04-05 01:22:09.351431 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-05 01:22:09.351436 | orchestrator | Sunday 05 April 2026 01:22:08 +0000 (0:00:00.345) 0:00:03.937 **********
2026-04-05 01:22:09.351441 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:22:09.351446 | orchestrator |
2026-04-05 01:22:09.351451 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 01:22:09.351456 | orchestrator | Sunday 05 April 2026 01:22:08 +0000 (0:00:00.398) 0:00:04.335 **********
2026-04-05 01:22:09.351461 | orchestrator | ok: [testbed-node-3]
2026-04-05 01:22:09.351466 | orchestrator | ok: [testbed-node-4]
2026-04-05 01:22:09.351472 | orchestrator | ok: [testbed-node-5]
2026-04-05 01:22:09.351477 | orchestrator |
2026-04-05 01:22:09.351482 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-05 01:22:09.351487 | orchestrator | Sunday 05 April 2026 01:22:09 +0000 (0:00:00.319) 0:00:04.654 **********
2026-04-05 01:22:09.351494 | orchestrator | skipping: [testbed-node-3] => (item={'id': '07c68398f9ab9ccedd83496f76f7e1fb3903d7b83228b6626ae35bb63ad9cb25', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-05 01:22:09.351503 | orchestrator | skipping: [testbed-node-3] => (item={'id': '14dedfce77c144a721e86e546c4dd841f7e3f3932fbac406b9a7a025989ba533', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-05 01:22:09.351508 | orchestrator | skipping: [testbed-node-3] => (item={'id': '54a5260052cec2319cd68c6d54795ad4275ca4e0368fdff12d55cff86d6c28dd', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2026-04-05 01:22:09.351526 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69f63ff2d76a01d9af1e53b4b2270985342cabbbc0ce758a2bf49745d9d67725', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2026-04-05 01:22:09.351538 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2666805304e61c1b095b46dc7e548bd33b06003d7cee086c1fd79b6659796b5d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-04-05 01:22:09.351561 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ea2edea4960875e70b4d4af1bd0a1ab1e36a93d42eea1ff096bd82ab3f07a52f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})
2026-04-05 01:22:09.351570 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2790c296435b3935afd488b599444b8022cb8d3c12f8ed62c623e8e08eb40cc7', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2026-04-05 01:22:09.351580 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8bd6a485e0d4214a4db6668e6479df22ce2a6bb0b8efcf58962180b5b8b5efe5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-05 01:22:09.351587 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cba9041e8ce7b223f072a00261f076ad536cdfc39f64c2201b2e1d578dc9808d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-05 01:22:09.351602 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'acf79a6524933b9d008bcb5021f99cd99943b358beabb9a7f0bdbd23097d1ea5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})
2026-04-05 01:22:09.351611 | orchestrator | ok: [testbed-node-3] => (item={'id': '3903c6321cf300a1c6f6d3bf656b4a84515b8b3b1adfb3a36ca334667888978e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'})
2026-04-05 01:22:09.351619 | orchestrator | ok: [testbed-node-3] => (item={'id': '1c66dab7989e96311d8b85d7a5bb52f7bff01a8a1e57076e7eb29437dfcd8ee5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'})
2026-04-05 01:22:09.351626 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c84ef41097d35e38247f180c9f2d4b67d6d0a81c18ce77327e134a1d795db7e8', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2026-04-05 01:22:09.351635 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'adb8631022731e33c31ff5b6e8ce818287c20de7159052cfdb7122874d0d0d3e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2026-04-05 01:22:09.351643 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69c3b977727a34c5c79f65d48be013287bb31655471afbdc2b35559144b9e2bf', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-04-05 01:22:09.351651 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65da2b953f649ec24a44cfc4a305df1fa4610434f6ee914804e450a4f4e893db', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2026-04-05 01:22:09.351658 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b65b85d8255a07a495a602eae8118609f73a4cf62fcb22ff4a35a65b1d9f8591', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2026-04-05 01:22:09.351666 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a9dbb421cce7a2f54093a6639d8725f2b4625773e1b7271916113ea5e6b09fea', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-04-05 01:22:09.351674 | orchestrator | skipping: [testbed-node-4] => (item={'id': '93ee2c854061048c333abe4b7bb12bc8e84c47f76780640ffe4284698c004725', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-05 01:22:09.351682 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b44a30f0852ae88f35a3b1231c2086f1efd9358dd92a55bdc15fa2c1d983f1a0', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-05 01:22:09.351696 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6a23395eb5da789be44813a6e1d9107d5ea99f3e3deaa30bd073d1ea8d16f7e8', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2026-04-05 01:22:09.351711 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8f3c883563f170220b3c541384847aef1d1ab5d42fc304057ec86132e4e061e2', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2026-04-05 01:22:09.566638 | orchestrator | skipping: [testbed-node-4] => (item={'id': '52c5c993cf23c4caceb601e2650a7c675d3a3faef8adc1b2a899f67ca16a8c15', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-04-05 01:22:09.566763 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f8d83745627e0c06ff9682941f6fba38324792f33a8cccf5006e806b431cbe3f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})
2026-04-05 01:22:09.566780 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bfa1fefbdb3faae36a8a5425056062322f305e3f3dda6571670958ae92665901', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})
2026-04-05 01:22:09.566793 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bed1ec199875aa0256368cd61f36d01935eed422cb722480ff53fd8ec6a49e89', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-05 01:22:09.566805 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e557d89bf868997bb225c762c8859947cc6886d9907cc67e417538293c67ceb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-05 01:22:09.566816 | orchestrator | skipping: [testbed-node-4] => (item={'id':
'f4134aee66a8d89714e56135c304f532d37e3e0b9b421d1a09b931b89c44aa6e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:22:09.566830 | orchestrator | ok: [testbed-node-4] => (item={'id': '1ad35f80ac00d4ae9f8967b238afcd99d03f12c082b1d61f7bbe4ee5f55cf975', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:22:09.566843 | orchestrator | ok: [testbed-node-4] => (item={'id': '9e837de8731c74548874e42ae9795f1c030bcf61c10e61c3a7060f4d929ed82c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:22:09.566855 | orchestrator | skipping: [testbed-node-4] => (item={'id': '87a335c3590a42f518ad8ce9c318a44e30caacd195e9516b5815f49e8ed8898c', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-05 01:22:09.566936 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cc31dedcaf797d105edd0d7bc57e21d299e4e9b3c06ba3f48bbcbd711155200d', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:22:09.566949 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0a0720411629492d2da8afec0489ce921834313861ca8746d6d1ccb25c86435f', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:22:09.566961 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c5257d91621cc78ccb0ac1d5f199af1761be52e5a01636fd5594b0e9bb12a7c7', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-05 01:22:09.566972 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '52b2a8d3b1fdda6f1deb091cbbdf076c90cbb25ab927606cd6b4c5ef0a972a8f', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-05 01:22:09.566998 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2ca2bb640df053ad3195aeccba8916900f2fa0988121a1e1000cafcd43eb718c', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:22:09.567010 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad25240e3ef5ff1ba9066bb3f340ba2d931e6e836c47349696885c6dee74e1b4', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-05 01:22:09.567048 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c0f481d9933b69c1223f58d698e68985bcf5a5fbbb43427d67eef95d87c5b496', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-05 01:22:09.567061 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2f5cfb68f4d705ff5684bb7eaa46afcff53b0551856fe80e3bc242f3c0bbeb04', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-05 01:22:09.567072 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8a2c9d0bec0d9bc83029dbe99b69a88dfb2655ae8ad609e294c54631fa0ba2e3', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2026-04-05 01:22:09.567084 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'db03e7a7feb28f4f4156d6ca6bd0ec300ebebe650f67993f3fed0d2de557e0d2', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 
'running', 'status': 'Up 15 minutes'})  2026-04-05 01:22:09.567095 | orchestrator | skipping: [testbed-node-5] => (item={'id': '918e2cbb569519053b5a3130df85dedcb5ab370dd19504e2d61d20e337c1314f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-05 01:22:09.567106 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2cce31aba726489de06ed0351c10672ed7e29b5666beb2ce0537eda07fc39492', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2026-04-05 01:22:09.567118 | orchestrator | skipping: [testbed-node-5] => (item={'id': '484e14e8d39ae4882e01bb023a9afa17660bfc8f866a8a78c3e3b1d77756e30b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-05 01:22:09.567129 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b36f0f5da119dada55acd0a3c92cf48005ac7de2a725f4d552dfda30cd58a129', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2026-04-05 01:22:09.567141 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f0e26e85cffb79e938cab2c96ab410bf57de956457590c06d7bcfd81139f7ceb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-04-05 01:22:09.567152 | orchestrator | ok: [testbed-node-5] => (item={'id': '996ce3198f92bb703d3a1773792e26a712418ba4fb923848e1293f9968ec88a9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:22:09.567164 | orchestrator | ok: [testbed-node-5] => (item={'id': '8ef5716bd11a0e78db16a6a151703cf18b2b010a4d4bd70c41db9de4fe11ac3e', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-04-05 01:22:09.567177 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bc73ca51f4d4dd0ec9b57c5cdc85cef00a3e6a5bea9ac8a104c2ce0b32c63f1e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-05 01:22:09.567190 | orchestrator | skipping: [testbed-node-5] => (item={'id': '084049e5bd4b4627592ca8c38f91118bdbec1e168d96872f9aec7ec32fcaff28', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:22:09.567208 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b2cb80e965da34216b28f54f3d58b881a3dff816bf038105968e99456b7ac8ed', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-04-05 01:22:09.567230 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c27025cf270b89681114449da30b9712cd68c30f836e4e168155dbfbfd8da8ef', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-05 01:22:09.567244 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f382c80d718e453529a8afa5baaee2a5cee95b85f62736d38f7b69a65c4d784a', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-04-05 01:22:09.567266 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1bffbad8f06066a313554047080597068aec2155e65c01c1d06836a4ff1f9d27', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-04-05 01:22:23.471083 | orchestrator | 2026-04-05 01:22:23.471200 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-04-05 01:22:23.471218 | orchestrator | Sunday 05 April 2026 01:22:09 +0000 (0:00:00.747) 0:00:05.402 ********** 2026-04-05 01:22:23.471230 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.471242 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.471253 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.471264 | orchestrator | 2026-04-05 01:22:23.471275 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-05 01:22:23.471287 | orchestrator | Sunday 05 April 2026 01:22:10 +0000 (0:00:00.315) 0:00:05.717 ********** 2026-04-05 01:22:23.471298 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.471310 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:23.471321 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:23.471331 | orchestrator | 2026-04-05 01:22:23.471343 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-05 01:22:23.471354 | orchestrator | Sunday 05 April 2026 01:22:10 +0000 (0:00:00.338) 0:00:06.056 ********** 2026-04-05 01:22:23.471365 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.471376 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.471387 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.471398 | orchestrator | 2026-04-05 01:22:23.471409 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:22:23.471420 | orchestrator | Sunday 05 April 2026 01:22:10 +0000 (0:00:00.306) 0:00:06.362 ********** 2026-04-05 01:22:23.471431 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.471442 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.471452 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.471464 | orchestrator | 2026-04-05 01:22:23.471475 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-05 
01:22:23.471486 | orchestrator | Sunday 05 April 2026 01:22:11 +0000 (0:00:00.475) 0:00:06.837 ********** 2026-04-05 01:22:23.471497 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-05 01:22:23.471509 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-05 01:22:23.471520 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.471532 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-05 01:22:23.471543 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-05 01:22:23.471554 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:23.471565 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-05 01:22:23.471576 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-05 01:22:23.471587 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:23.471598 | orchestrator | 2026-04-05 01:22:23.471612 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-05 01:22:23.471625 | orchestrator | Sunday 05 April 2026 01:22:11 +0000 (0:00:00.355) 0:00:07.192 ********** 2026-04-05 01:22:23.471663 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.471676 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.471689 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.471703 | orchestrator | 2026-04-05 01:22:23.471717 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-05 01:22:23.471729 | orchestrator | Sunday 05 April 2026 01:22:11 +0000 (0:00:00.335) 0:00:07.527 ********** 2026-04-05 01:22:23.471742 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 01:22:23.471755 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:23.471767 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:23.471781 | orchestrator | 2026-04-05 01:22:23.471794 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-05 01:22:23.471807 | orchestrator | Sunday 05 April 2026 01:22:12 +0000 (0:00:00.304) 0:00:07.832 ********** 2026-04-05 01:22:23.471820 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.471832 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:23.471845 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:23.471857 | orchestrator | 2026-04-05 01:22:23.471899 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-05 01:22:23.471913 | orchestrator | Sunday 05 April 2026 01:22:12 +0000 (0:00:00.571) 0:00:08.403 ********** 2026-04-05 01:22:23.471926 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.471938 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.471949 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.471959 | orchestrator | 2026-04-05 01:22:23.471970 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:22:23.471982 | orchestrator | Sunday 05 April 2026 01:22:13 +0000 (0:00:00.320) 0:00:08.724 ********** 2026-04-05 01:22:23.471993 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.472003 | orchestrator | 2026-04-05 01:22:23.472014 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:22:23.472025 | orchestrator | Sunday 05 April 2026 01:22:13 +0000 (0:00:00.262) 0:00:08.987 ********** 2026-04-05 01:22:23.472036 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.472047 | orchestrator | 2026-04-05 01:22:23.472058 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-04-05 01:22:23.472068 | orchestrator | Sunday 05 April 2026 01:22:13 +0000 (0:00:00.255) 0:00:09.243 ********** 2026-04-05 01:22:23.472079 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.472090 | orchestrator | 2026-04-05 01:22:23.472101 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:22:23.472112 | orchestrator | Sunday 05 April 2026 01:22:13 +0000 (0:00:00.278) 0:00:09.522 ********** 2026-04-05 01:22:23.472126 | orchestrator | 2026-04-05 01:22:23.472143 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:22:23.472162 | orchestrator | Sunday 05 April 2026 01:22:14 +0000 (0:00:00.070) 0:00:09.592 ********** 2026-04-05 01:22:23.472180 | orchestrator | 2026-04-05 01:22:23.472198 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:22:23.472239 | orchestrator | Sunday 05 April 2026 01:22:14 +0000 (0:00:00.072) 0:00:09.664 ********** 2026-04-05 01:22:23.472260 | orchestrator | 2026-04-05 01:22:23.472277 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:22:23.472293 | orchestrator | Sunday 05 April 2026 01:22:14 +0000 (0:00:00.071) 0:00:09.736 ********** 2026-04-05 01:22:23.472311 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.472328 | orchestrator | 2026-04-05 01:22:23.472345 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-05 01:22:23.472363 | orchestrator | Sunday 05 April 2026 01:22:14 +0000 (0:00:00.717) 0:00:10.454 ********** 2026-04-05 01:22:23.472382 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.472401 | orchestrator | 2026-04-05 01:22:23.472419 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:22:23.472438 | 
orchestrator | Sunday 05 April 2026 01:22:15 +0000 (0:00:00.260) 0:00:10.714 ********** 2026-04-05 01:22:23.472472 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.472500 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.472521 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.472539 | orchestrator | 2026-04-05 01:22:23.472617 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-05 01:22:23.472639 | orchestrator | Sunday 05 April 2026 01:22:15 +0000 (0:00:00.340) 0:00:11.055 ********** 2026-04-05 01:22:23.472658 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.472675 | orchestrator | 2026-04-05 01:22:23.472693 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-05 01:22:23.472710 | orchestrator | Sunday 05 April 2026 01:22:15 +0000 (0:00:00.236) 0:00:11.291 ********** 2026-04-05 01:22:23.472728 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 01:22:23.472748 | orchestrator | 2026-04-05 01:22:23.472767 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-05 01:22:23.472786 | orchestrator | Sunday 05 April 2026 01:22:17 +0000 (0:00:02.142) 0:00:13.433 ********** 2026-04-05 01:22:23.472804 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.472822 | orchestrator | 2026-04-05 01:22:23.472841 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-05 01:22:23.472860 | orchestrator | Sunday 05 April 2026 01:22:17 +0000 (0:00:00.128) 0:00:13.562 ********** 2026-04-05 01:22:23.472919 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.472931 | orchestrator | 2026-04-05 01:22:23.472942 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-05 01:22:23.472953 | orchestrator | Sunday 05 April 2026 01:22:18 +0000 (0:00:00.352) 0:00:13.915 
********** 2026-04-05 01:22:23.472964 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.472975 | orchestrator | 2026-04-05 01:22:23.472986 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-05 01:22:23.472997 | orchestrator | Sunday 05 April 2026 01:22:18 +0000 (0:00:00.165) 0:00:14.080 ********** 2026-04-05 01:22:23.473009 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.473019 | orchestrator | 2026-04-05 01:22:23.473030 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:22:23.473041 | orchestrator | Sunday 05 April 2026 01:22:18 +0000 (0:00:00.159) 0:00:14.240 ********** 2026-04-05 01:22:23.473052 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.473063 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.473075 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.473085 | orchestrator | 2026-04-05 01:22:23.473096 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-05 01:22:23.473107 | orchestrator | Sunday 05 April 2026 01:22:19 +0000 (0:00:00.479) 0:00:14.719 ********** 2026-04-05 01:22:23.473118 | orchestrator | changed: [testbed-node-3] 2026-04-05 01:22:23.473130 | orchestrator | changed: [testbed-node-4] 2026-04-05 01:22:23.473140 | orchestrator | changed: [testbed-node-5] 2026-04-05 01:22:23.473151 | orchestrator | 2026-04-05 01:22:23.473162 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-05 01:22:23.473173 | orchestrator | Sunday 05 April 2026 01:22:20 +0000 (0:00:01.824) 0:00:16.544 ********** 2026-04-05 01:22:23.473197 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.473208 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.473219 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.473230 | orchestrator | 2026-04-05 01:22:23.473241 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-04-05 01:22:23.473252 | orchestrator | Sunday 05 April 2026 01:22:21 +0000 (0:00:00.327) 0:00:16.871 ********** 2026-04-05 01:22:23.473263 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.473274 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.473285 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.473296 | orchestrator | 2026-04-05 01:22:23.473307 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-05 01:22:23.473318 | orchestrator | Sunday 05 April 2026 01:22:21 +0000 (0:00:00.508) 0:00:17.379 ********** 2026-04-05 01:22:23.473340 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.473352 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:23.473369 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:23.473380 | orchestrator | 2026-04-05 01:22:23.473391 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-05 01:22:23.473402 | orchestrator | Sunday 05 April 2026 01:22:22 +0000 (0:00:00.501) 0:00:17.881 ********** 2026-04-05 01:22:23.473413 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:23.473424 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:23.473435 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:23.473446 | orchestrator | 2026-04-05 01:22:23.473457 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-05 01:22:23.473467 | orchestrator | Sunday 05 April 2026 01:22:22 +0000 (0:00:00.355) 0:00:18.237 ********** 2026-04-05 01:22:23.473480 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.473500 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:23.473518 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:23.473536 | orchestrator | 2026-04-05 01:22:23.473554 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-04-05 01:22:23.473573 | orchestrator | Sunday 05 April 2026 01:22:22 +0000 (0:00:00.323) 0:00:18.561 ********** 2026-04-05 01:22:23.473592 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:23.473610 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:23.473630 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:23.473649 | orchestrator | 2026-04-05 01:22:23.473683 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 01:22:31.260648 | orchestrator | Sunday 05 April 2026 01:22:23 +0000 (0:00:00.459) 0:00:19.021 ********** 2026-04-05 01:22:31.260763 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:31.260780 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:31.260792 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:31.260804 | orchestrator | 2026-04-05 01:22:31.260817 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-05 01:22:31.260828 | orchestrator | Sunday 05 April 2026 01:22:23 +0000 (0:00:00.507) 0:00:19.528 ********** 2026-04-05 01:22:31.260840 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:31.260851 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:31.260862 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:31.260903 | orchestrator | 2026-04-05 01:22:31.260916 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-05 01:22:31.260927 | orchestrator | Sunday 05 April 2026 01:22:24 +0000 (0:00:00.528) 0:00:20.057 ********** 2026-04-05 01:22:31.260938 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:31.260949 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:31.260961 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:31.260972 | orchestrator | 2026-04-05 01:22:31.260983 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-05 
01:22:31.260994 | orchestrator | Sunday 05 April 2026 01:22:24 +0000 (0:00:00.374) 0:00:20.431 ********** 2026-04-05 01:22:31.261006 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:31.261018 | orchestrator | skipping: [testbed-node-4] 2026-04-05 01:22:31.261029 | orchestrator | skipping: [testbed-node-5] 2026-04-05 01:22:31.261040 | orchestrator | 2026-04-05 01:22:31.261052 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-05 01:22:31.261063 | orchestrator | Sunday 05 April 2026 01:22:25 +0000 (0:00:00.502) 0:00:20.934 ********** 2026-04-05 01:22:31.261075 | orchestrator | ok: [testbed-node-3] 2026-04-05 01:22:31.261086 | orchestrator | ok: [testbed-node-4] 2026-04-05 01:22:31.261097 | orchestrator | ok: [testbed-node-5] 2026-04-05 01:22:31.261108 | orchestrator | 2026-04-05 01:22:31.261120 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-05 01:22:31.261131 | orchestrator | Sunday 05 April 2026 01:22:25 +0000 (0:00:00.326) 0:00:21.260 ********** 2026-04-05 01:22:31.261142 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:22:31.261174 | orchestrator | 2026-04-05 01:22:31.261189 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-05 01:22:31.261201 | orchestrator | Sunday 05 April 2026 01:22:25 +0000 (0:00:00.283) 0:00:21.543 ********** 2026-04-05 01:22:31.261214 | orchestrator | skipping: [testbed-node-3] 2026-04-05 01:22:31.261226 | orchestrator | 2026-04-05 01:22:31.261239 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 01:22:31.261251 | orchestrator | Sunday 05 April 2026 01:22:26 +0000 (0:00:00.274) 0:00:21.817 ********** 2026-04-05 01:22:31.261265 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:22:31.261277 | orchestrator | 2026-04-05 01:22:31.261290 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 01:22:31.261302 | orchestrator | Sunday 05 April 2026 01:22:28 +0000 (0:00:01.794) 0:00:23.612 ********** 2026-04-05 01:22:31.261315 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:22:31.261328 | orchestrator | 2026-04-05 01:22:31.261341 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 01:22:31.261353 | orchestrator | Sunday 05 April 2026 01:22:28 +0000 (0:00:00.270) 0:00:23.883 ********** 2026-04-05 01:22:31.261365 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:22:31.261378 | orchestrator | 2026-04-05 01:22:31.261390 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:22:31.261403 | orchestrator | Sunday 05 April 2026 01:22:28 +0000 (0:00:00.271) 0:00:24.155 ********** 2026-04-05 01:22:31.261416 | orchestrator | 2026-04-05 01:22:31.261429 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:22:31.261442 | orchestrator | Sunday 05 April 2026 01:22:28 +0000 (0:00:00.081) 0:00:24.237 ********** 2026-04-05 01:22:31.261454 | orchestrator | 2026-04-05 01:22:31.261467 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 01:22:31.261479 | orchestrator | Sunday 05 April 2026 01:22:28 +0000 (0:00:00.256) 0:00:24.493 ********** 2026-04-05 01:22:31.261492 | orchestrator | 2026-04-05 01:22:31.261504 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-05 01:22:31.261517 | orchestrator | Sunday 05 April 2026 01:22:29 +0000 (0:00:00.099) 0:00:24.592 ********** 2026-04-05 01:22:31.261530 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 01:22:31.261543 | orchestrator | 
2026-04-05 01:22:31.261554 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 01:22:31.261580 | orchestrator | Sunday 05 April 2026 01:22:30 +0000 (0:00:01.374) 0:00:25.967 ********** 2026-04-05 01:22:31.261592 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-05 01:22:31.261604 | orchestrator |  "msg": [ 2026-04-05 01:22:31.261616 | orchestrator |  "Validator run completed.", 2026-04-05 01:22:31.261628 | orchestrator |  "You can find the report file here:", 2026-04-05 01:22:31.261639 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-05T01:22:05+00:00-report.json", 2026-04-05 01:22:31.261651 | orchestrator |  "on the following host:", 2026-04-05 01:22:31.261662 | orchestrator |  "testbed-manager" 2026-04-05 01:22:31.261674 | orchestrator |  ] 2026-04-05 01:22:31.261685 | orchestrator | } 2026-04-05 01:22:31.261697 | orchestrator | 2026-04-05 01:22:31.261708 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:22:31.261720 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 01:22:31.261732 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 01:22:31.261760 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 01:22:31.261781 | orchestrator | 2026-04-05 01:22:31.261792 | orchestrator | 2026-04-05 01:22:31.261804 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:22:31.261815 | orchestrator | Sunday 05 April 2026 01:22:30 +0000 (0:00:00.444) 0:00:26.411 ********** 2026-04-05 01:22:31.261826 | orchestrator | =============================================================================== 2026-04-05 01:22:31.261837 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.14s 2026-04-05 01:22:31.261847 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.82s 2026-04-05 01:22:31.261858 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2026-04-05 01:22:31.261910 | orchestrator | Write report file ------------------------------------------------------- 1.37s 2026-04-05 01:22:31.261923 | orchestrator | Get timestamp for report file ------------------------------------------- 1.11s 2026-04-05 01:22:31.261934 | orchestrator | Create report output directory ------------------------------------------ 0.80s 2026-04-05 01:22:31.261945 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.75s 2026-04-05 01:22:31.261956 | orchestrator | Print report file information ------------------------------------------- 0.72s 2026-04-05 01:22:31.261967 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.57s 2026-04-05 01:22:31.261978 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.53s 2026-04-05 01:22:31.261989 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2026-04-05 01:22:31.262000 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2026-04-05 01:22:31.262011 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.50s 2026-04-05 01:22:31.262064 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.50s 2026-04-05 01:22:31.262075 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.48s 2026-04-05 01:22:31.262086 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2026-04-05 01:22:31.262097 | orchestrator | Prepare test data 
------------------------------------------------------- 0.48s 2026-04-05 01:22:31.262108 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.46s 2026-04-05 01:22:31.262119 | orchestrator | Print report file information ------------------------------------------- 0.44s 2026-04-05 01:22:31.262130 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s 2026-04-05 01:22:31.538738 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-05 01:22:31.547513 | orchestrator | + set -e 2026-04-05 01:22:31.547595 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 01:22:31.547608 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 01:22:31.547618 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 01:22:31.547628 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 01:22:31.547639 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 01:22:31.547649 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 01:22:31.547660 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 01:22:31.547670 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:22:31.547680 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:22:31.547690 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 01:22:31.547699 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 01:22:31.547709 | orchestrator | ++ export ARA=false 2026-04-05 01:22:31.547719 | orchestrator | ++ ARA=false 2026-04-05 01:22:31.547729 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 01:22:31.547739 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 01:22:31.547748 | orchestrator | ++ export TEMPEST=true 2026-04-05 01:22:31.547758 | orchestrator | ++ TEMPEST=true 2026-04-05 01:22:31.547768 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 01:22:31.547777 | orchestrator | ++ IS_ZUUL=true 2026-04-05 01:22:31.547787 | orchestrator | ++ export 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 01:22:31.547797 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221 2026-04-05 01:22:31.547807 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 01:22:31.547816 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 01:22:31.547826 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 01:22:31.547860 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 01:22:31.547933 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 01:22:31.547946 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 01:22:31.547956 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 01:22:31.547966 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 01:22:31.547975 | orchestrator | + source /etc/os-release 2026-04-05 01:22:31.547985 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-05 01:22:31.547994 | orchestrator | ++ NAME=Ubuntu 2026-04-05 01:22:31.548007 | orchestrator | ++ VERSION_ID=24.04 2026-04-05 01:22:31.548024 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-05 01:22:31.548039 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-05 01:22:31.548055 | orchestrator | ++ ID=ubuntu 2026-04-05 01:22:31.548071 | orchestrator | ++ ID_LIKE=debian 2026-04-05 01:22:31.548089 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-05 01:22:31.548107 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-05 01:22:31.548124 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-05 01:22:31.548156 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-05 01:22:31.548168 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-05 01:22:31.548177 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-05 01:22:31.548187 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-05 01:22:31.548197 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 
2026-04-05 01:22:31.548209 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-05 01:22:31.584675 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-05 01:23:02.308551 | orchestrator | 2026-04-05 01:23:02.308650 | orchestrator | # Status of Elasticsearch 2026-04-05 01:23:02.308672 | orchestrator | 2026-04-05 01:23:02.308690 | orchestrator | + pushd /opt/configuration/contrib 2026-04-05 01:23:02.308706 | orchestrator | + echo 2026-04-05 01:23:02.308721 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-05 01:23:02.308736 | orchestrator | + echo 2026-04-05 01:23:02.308751 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-05 01:23:02.499883 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-05 01:23:02.500011 | orchestrator | 2026-04-05 01:23:02.500024 | orchestrator | # Status of MariaDB 2026-04-05 01:23:02.500034 | orchestrator | 2026-04-05 01:23:02.500043 | orchestrator | + echo 2026-04-05 01:23:02.500052 | orchestrator | + echo '# Status of MariaDB' 2026-04-05 01:23:02.500060 | orchestrator | + echo 2026-04-05 01:23:02.500563 | orchestrator | ++ semver latest 10.0.0-0 2026-04-05 01:23:02.559192 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 01:23:02.559275 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 01:23:02.559289 | orchestrator | + osism status database 2026-04-05 01:23:04.215647 | orchestrator | 2026-04-05 01:23:04 | ERROR  | Unable to get ansible vault password 2026-04-05 01:23:04.215761 | orchestrator | 2026-04-05 
01:23:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:23:04.215779 | orchestrator | 2026-04-05 01:23:04 | ERROR  | Dropping encrypted entries 2026-04-05 01:23:04.252306 | orchestrator | 2026-04-05 01:23:04 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-04-05 01:23:04.265271 | orchestrator | 2026-04-05 01:23:04 | INFO  | Cluster Status: Primary 2026-04-05 01:23:04.265334 | orchestrator | 2026-04-05 01:23:04 | INFO  | Connected: ON 2026-04-05 01:23:04.265341 | orchestrator | 2026-04-05 01:23:04 | INFO  | Ready: ON 2026-04-05 01:23:04.265346 | orchestrator | 2026-04-05 01:23:04 | INFO  | Cluster Size: 3 2026-04-05 01:23:04.265352 | orchestrator | 2026-04-05 01:23:04 | INFO  | Local State: Synced 2026-04-05 01:23:04.265357 | orchestrator | 2026-04-05 01:23:04 | INFO  | Cluster State UUID: 776008e6-308a-11f1-b045-529d07800955 2026-04-05 01:23:04.265384 | orchestrator | 2026-04-05 01:23:04 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-04-05 01:23:04.265394 | orchestrator | 2026-04-05 01:23:04 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-05 01:23:04.265402 | orchestrator | 2026-04-05 01:23:04 | INFO  | Local Node UUID: aeb2ab0e-308a-11f1-a021-d764770fbd22 2026-04-05 01:23:04.265410 | orchestrator | 2026-04-05 01:23:04 | INFO  | Flow Control Paused: 0.00% 2026-04-05 01:23:04.265418 | orchestrator | 2026-04-05 01:23:04 | INFO  | Recv Queue Avg: 0.0447761 2026-04-05 01:23:04.265425 | orchestrator | 2026-04-05 01:23:04 | INFO  | Send Queue Avg: 0.00058326 2026-04-05 01:23:04.265432 | orchestrator | 2026-04-05 01:23:04 | INFO  | Transactions: 4614 local commits, 6799 replicated, 67 received 2026-04-05 01:23:04.265440 | orchestrator | 2026-04-05 01:23:04 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-05 01:23:04.265448 | orchestrator | 2026-04-05 01:23:04 | INFO  | MariaDB Uptime: 23 minutes, 43 seconds 
2026-04-05 01:23:04.265455 | orchestrator | 2026-04-05 01:23:04 | INFO  | Threads: 135 connected, 1 running 2026-04-05 01:23:04.265463 | orchestrator | 2026-04-05 01:23:04 | INFO  | Queries: 219306 total, 0 slow 2026-04-05 01:23:04.265470 | orchestrator | 2026-04-05 01:23:04 | INFO  | Aborted Connects: 158 2026-04-05 01:23:04.265478 | orchestrator | 2026-04-05 01:23:04 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-05 01:23:04.528255 | orchestrator | 2026-04-05 01:23:04.528360 | orchestrator | # Status of Prometheus 2026-04-05 01:23:04.528375 | orchestrator | 2026-04-05 01:23:04.528386 | orchestrator | + echo 2026-04-05 01:23:04.528397 | orchestrator | + echo '# Status of Prometheus' 2026-04-05 01:23:04.528407 | orchestrator | + echo 2026-04-05 01:23:04.528417 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-05 01:23:04.593954 | orchestrator | Unauthorized 2026-04-05 01:23:04.596721 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-05 01:23:04.680691 | orchestrator | Unauthorized 2026-04-05 01:23:04.686546 | orchestrator | 2026-04-05 01:23:04.686614 | orchestrator | # Status of RabbitMQ 2026-04-05 01:23:04.686624 | orchestrator | 2026-04-05 01:23:04.686632 | orchestrator | + echo 2026-04-05 01:23:04.686640 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-05 01:23:04.686647 | orchestrator | + echo 2026-04-05 01:23:04.688294 | orchestrator | ++ semver latest 10.0.0-0 2026-04-05 01:23:04.762854 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 01:23:04.763717 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 01:23:04.763749 | orchestrator | + osism status messaging 2026-04-05 01:23:12.521541 | orchestrator | 2026-04-05 01:23:12 | ERROR  | Unable to get ansible vault password 2026-04-05 01:23:12.521680 | orchestrator | 2026-04-05 01:23:12 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:23:12.521711 | 
orchestrator | 2026-04-05 01:23:12 | ERROR  | Dropping encrypted entries 2026-04-05 01:23:12.555966 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-04-05 01:23:12.610673 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-04-05 01:23:12.610771 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-04-05 01:23:12.610808 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-05 01:23:12.610820 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-05 01:23:12.610834 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:23:12.610847 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:23:12.610882 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-05 01:23:12.610940 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Connections: 203, Channels: 202, Queues: 173 2026-04-05 01:23:12.610960 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Messages: 235 total, 234 ready, 1 unacked 2026-04-05 01:23:12.610972 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Message Rates: 7.6/s publish, 7.6/s deliver 2026-04-05 01:23:12.612989 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Disk Free: 58.1 GB (limit: 0.0 GB) 2026-04-05 01:23:12.613058 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-05 01:23:12.613081 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-0] File Descriptors: 103/1024 2026-04-05 01:23:12.613099 | orchestrator | 2026-04-05 
01:23:12 | INFO  | [testbed-node-0] Sockets: 57/832 2026-04-05 01:23:12.613118 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-04-05 01:23:12.681840 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-04-05 01:23:12.682168 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-04-05 01:23:12.682200 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-05 01:23:12.682213 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-05 01:23:12.682240 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:23:12.682455 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:23:12.682717 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-05 01:23:12.683002 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Connections: 203, Channels: 202, Queues: 173 2026-04-05 01:23:12.683372 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Messages: 235 total, 234 ready, 1 unacked 2026-04-05 01:23:12.684000 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Message Rates: 7.6/s publish, 7.6/s deliver 2026-04-05 01:23:12.684030 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-05 01:23:12.684594 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-05 01:23:12.684613 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-1] File Descriptors: 120/1024 2026-04-05 01:23:12.684979 | orchestrator | 2026-04-05 01:23:12 | INFO  | 
[testbed-node-1] Sockets: 72/832 2026-04-05 01:23:12.685117 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-04-05 01:23:12.754301 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-04-05 01:23:12.754394 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-04-05 01:23:12.754408 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-05 01:23:12.754420 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-05 01:23:12.754453 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:23:12.754487 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-05 01:23:12.754499 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-05 01:23:12.754510 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Connections: 203, Channels: 202, Queues: 173 2026-04-05 01:23:12.754522 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Messages: 235 total, 234 ready, 1 unacked 2026-04-05 01:23:12.754533 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Message Rates: 7.6/s publish, 7.6/s deliver 2026-04-05 01:23:12.754543 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-05 01:23:12.754554 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-05 01:23:12.754565 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] File Descriptors: 120/1024 2026-04-05 01:23:12.754576 | orchestrator | 2026-04-05 01:23:12 | INFO  | [testbed-node-2] 
Sockets: 74/832 2026-04-05 01:23:12.754587 | orchestrator | 2026-04-05 01:23:12 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-05 01:23:13.025945 | orchestrator | 2026-04-05 01:23:13.026089 | orchestrator | # Status of Redis 2026-04-05 01:23:13.026109 | orchestrator | 2026-04-05 01:23:13.026122 | orchestrator | + echo 2026-04-05 01:23:13.026135 | orchestrator | + echo '# Status of Redis' 2026-04-05 01:23:13.026149 | orchestrator | + echo 2026-04-05 01:23:13.026163 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-05 01:23:13.029050 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001348s;;;0.000000;10.000000 2026-04-05 01:23:13.029452 | orchestrator | + popd 2026-04-05 01:23:13.029569 | orchestrator | + echo 2026-04-05 01:23:13.030385 | orchestrator | 2026-04-05 01:23:13.030399 | orchestrator | # Create backup of MariaDB database 2026-04-05 01:23:13.030407 | orchestrator | 2026-04-05 01:23:13.030414 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-05 01:23:13.030422 | orchestrator | + echo 2026-04-05 01:23:13.030430 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-05 01:23:14.355775 | orchestrator | 2026-04-05 01:23:14 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-05 01:23:14.423447 | orchestrator | 2026-04-05 01:23:14 | INFO  | Task dd4924ed-9a8c-48c0-b739-ea93536e5f61 (mariadb_backup) was prepared for execution. 2026-04-05 01:23:14.423567 | orchestrator | 2026-04-05 01:23:14 | INFO  | It takes a moment until task dd4924ed-9a8c-48c0-b739-ea93536e5f61 (mariadb_backup) has been started and output is visible here. 
2026-04-05 01:24:49.663250 | orchestrator | 2026-04-05 01:24:49.663406 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 01:24:49.663427 | orchestrator | 2026-04-05 01:24:49.663438 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 01:24:49.663449 | orchestrator | Sunday 05 April 2026 01:23:17 +0000 (0:00:00.244) 0:00:00.244 ********** 2026-04-05 01:24:49.663459 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:49.663470 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:49.663479 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:49.663489 | orchestrator | 2026-04-05 01:24:49.663499 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 01:24:49.663509 | orchestrator | Sunday 05 April 2026 01:23:18 +0000 (0:00:00.347) 0:00:00.592 ********** 2026-04-05 01:24:49.663519 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-05 01:24:49.663529 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-05 01:24:49.663539 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-05 01:24:49.663572 | orchestrator | 2026-04-05 01:24:49.663584 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-05 01:24:49.663595 | orchestrator | 2026-04-05 01:24:49.663606 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-05 01:24:49.663616 | orchestrator | Sunday 05 April 2026 01:23:18 +0000 (0:00:00.441) 0:00:01.034 ********** 2026-04-05 01:24:49.663627 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 01:24:49.663638 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 01:24:49.663649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 01:24:49.663659 | orchestrator | 
2026-04-05 01:24:49.663670 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 01:24:49.663681 | orchestrator | Sunday 05 April 2026 01:23:18 +0000 (0:00:00.417) 0:00:01.452 ********** 2026-04-05 01:24:49.663692 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 01:24:49.663704 | orchestrator | 2026-04-05 01:24:49.663716 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-05 01:24:49.663727 | orchestrator | Sunday 05 April 2026 01:23:19 +0000 (0:00:00.731) 0:00:02.183 ********** 2026-04-05 01:24:49.663737 | orchestrator | ok: [testbed-node-1] 2026-04-05 01:24:49.663748 | orchestrator | ok: [testbed-node-2] 2026-04-05 01:24:49.663759 | orchestrator | ok: [testbed-node-0] 2026-04-05 01:24:49.663773 | orchestrator | 2026-04-05 01:24:49.663786 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-05 01:24:49.663799 | orchestrator | Sunday 05 April 2026 01:23:22 +0000 (0:00:03.269) 0:00:05.452 ********** 2026-04-05 01:24:49.663812 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:24:49.663826 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:24:49.663852 | orchestrator | changed: [testbed-node-0] 2026-04-05 01:24:49.663865 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-05 01:24:49.663878 | orchestrator | 2026-04-05 01:24:49.663891 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-05 01:24:49.663904 | orchestrator | skipping: no hosts matched 2026-04-05 01:24:49.663917 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-05 01:24:49.663929 | orchestrator | 2026-04-05 01:24:49.663941 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-05 01:24:49.663979 | orchestrator | skipping: no hosts matched 2026-04-05 01:24:49.663993 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-05 01:24:49.664006 | orchestrator | mariadb_bootstrap_restart 2026-04-05 01:24:49.664019 | orchestrator | 2026-04-05 01:24:49.664032 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-05 01:24:49.664044 | orchestrator | skipping: no hosts matched 2026-04-05 01:24:49.664057 | orchestrator | 2026-04-05 01:24:49.664071 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-05 01:24:49.664083 | orchestrator | 2026-04-05 01:24:49.664097 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-05 01:24:49.664110 | orchestrator | Sunday 05 April 2026 01:24:48 +0000 (0:01:25.933) 0:01:31.386 ********** 2026-04-05 01:24:49.664121 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:49.664132 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:24:49.664142 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:24:49.664153 | orchestrator | 2026-04-05 01:24:49.664164 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-05 01:24:49.664174 | orchestrator | Sunday 05 April 2026 01:24:49 +0000 (0:00:00.306) 0:01:31.692 ********** 2026-04-05 01:24:49.664185 | orchestrator | skipping: [testbed-node-0] 2026-04-05 01:24:49.664196 | orchestrator | skipping: [testbed-node-1] 2026-04-05 01:24:49.664206 | orchestrator | skipping: [testbed-node-2] 2026-04-05 01:24:49.664217 | orchestrator | 2026-04-05 01:24:49.664228 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:24:49.664250 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 01:24:49.664262 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 01:24:49.664273 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 01:24:49.664284 | orchestrator | 2026-04-05 01:24:49.664295 | orchestrator | 2026-04-05 01:24:49.664305 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:24:49.664316 | orchestrator | Sunday 05 April 2026 01:24:49 +0000 (0:00:00.221) 0:01:31.914 ********** 2026-04-05 01:24:49.664327 | orchestrator | =============================================================================== 2026-04-05 01:24:49.664338 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 85.93s 2026-04-05 01:24:49.664367 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.27s 2026-04-05 01:24:49.664379 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.73s 2026-04-05 01:24:49.664390 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-04-05 01:24:49.664402 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2026-04-05 01:24:49.664413 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-04-05 01:24:49.664423 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2026-04-05 01:24:49.664434 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2026-04-05 01:24:49.872141 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-05 01:24:49.877459 | orchestrator | + set -e 2026-04-05 01:24:49.877544 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 01:24:49.877558 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-05 01:24:49.877571 | orchestrator | ++ INTERACTIVE=false 2026-04-05 01:24:49.877582 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 01:24:49.877593 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 01:24:49.877604 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 01:24:49.878542 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 01:24:49.885477 | orchestrator | 2026-04-05 01:24:49.885533 | orchestrator | # OpenStack endpoints 2026-04-05 01:24:49.885556 | orchestrator | 2026-04-05 01:24:49.885576 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-05 01:24:49.885595 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-05 01:24:49.885614 | orchestrator | + export OS_CLOUD=admin 2026-04-05 01:24:49.885633 | orchestrator | + OS_CLOUD=admin 2026-04-05 01:24:49.885651 | orchestrator | + echo 2026-04-05 01:24:49.885670 | orchestrator | + echo '# OpenStack endpoints' 2026-04-05 01:24:49.885688 | orchestrator | + echo 2026-04-05 01:24:49.885706 | orchestrator | + openstack endpoint list 2026-04-05 01:24:53.252656 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-05 01:24:53.252788 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-05 01:24:53.252809 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-05 01:24:53.252826 | orchestrator | | 03aa85f89f7747da851cdfaf5a76fe2e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-05 01:24:53.252861 | orchestrator | | 0c7bd8e638d949239e1c6337e1075962 | RegionOne | 
cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-05 01:24:53.252888 | orchestrator | | 158c04ded90948cdba26cc6da2a476f0 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-05 01:24:53.252929 | orchestrator | | 24092c3bb34549c482bb5580635b3d8f | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-05 01:24:53.252946 | orchestrator | | 3792bf67f2744722bc4113db111040f0 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-05 01:24:53.252986 | orchestrator | | 478b7c39e6414c3e9421732f613f3f09 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-05 01:24:53.253002 | orchestrator | | 4fda02bf83c5438b90414c50b5d14e22 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-05 01:24:53.253017 | orchestrator | | 5a3291be550844a4945fd638d6ed802e | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-05 01:24:53.253033 | orchestrator | | 62c6dab0c6c6474b93585736cb271a31 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-05 01:24:53.253048 | orchestrator | | 6dd439f53f71446698d44ec2493fc807 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-05 01:24:53.253064 | orchestrator | | 6ee864f7cead4bca883aee771bb9cb89 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-05 01:24:53.253078 | orchestrator | | 79e413b27f9d440986b4a5a105e6b2b2 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-05 01:24:53.253094 | orchestrator | | 85c5d8b714fd4a33bdf289c67a64ff11 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 
2026-04-05 01:24:53.253110 | orchestrator | | 9ee8d51ed4e94532b4cc7ad097e75c35 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-05 01:24:53.253124 | orchestrator | | a92fcf13286046e2bdc776fc4ee57cf2 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-05 01:24:53.253140 | orchestrator | | ab495aec7ea44bf18cf39c0a8fbdc25b | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-05 01:24:53.253155 | orchestrator | | af9a17cbad844512877dcc3ee40dd979 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-05 01:24:53.253171 | orchestrator | | b1159476d7da429c9ab69e092ae7d30e | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-05 01:24:53.253186 | orchestrator | | c00cadff759a4341adf2d38217f09b46 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-05 01:24:53.253202 | orchestrator | | c8bcd4c271164535b51094d4d2279486 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-05 01:24:53.253240 | orchestrator | | ccadb9efb0a144abaa7b8e6f66dd30c5 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-05 01:24:53.253257 | orchestrator | | d47f9d71f211407abaa9055d0f995722 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-05 01:24:53.253272 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-05 01:24:53.533107 | orchestrator |
2026-04-05 01:24:53.533208 | orchestrator | # Cinder
2026-04-05 01:24:53.533225 | orchestrator |
2026-04-05 01:24:53.533236 | orchestrator | + echo
2026-04-05 01:24:53.533248 | orchestrator | + echo '# Cinder'
2026-04-05 01:24:53.533260 | orchestrator | + echo
2026-04-05 01:24:53.533271 | orchestrator | + openstack volume service list
2026-04-05 01:24:56.830268 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 01:24:56.830407 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-05 01:24:56.830433 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 01:24:56.830453 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T01:24:55.000000 |
2026-04-05 01:24:56.830472 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T01:24:55.000000 |
2026-04-05 01:24:56.830490 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T01:24:55.000000 |
2026-04-05 01:24:56.830509 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-05T01:24:55.000000 |
2026-04-05 01:24:56.830528 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-05T01:24:52.000000 |
2026-04-05 01:24:56.830546 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-05T01:24:53.000000 |
2026-04-05 01:24:56.830564 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-05T01:24:50.000000 |
2026-04-05 01:24:56.830583 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-05T01:24:53.000000 |
2026-04-05 01:24:56.830603 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-05T01:24:53.000000 |
2026-04-05 01:24:56.830621 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 01:24:57.113478 | orchestrator |
2026-04-05 01:24:57.113576 | orchestrator | # Neutron
2026-04-05 01:24:57.113592 | orchestrator |
2026-04-05 01:24:57.113604 | orchestrator | + echo
2026-04-05 01:24:57.113616 | orchestrator | + echo '# Neutron'
2026-04-05 01:24:57.113628 | orchestrator | + echo
2026-04-05 01:24:57.113640 | orchestrator | + openstack network agent list
2026-04-05 01:24:59.971674 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 01:24:59.971801 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-05 01:24:59.971824 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 01:24:59.971841 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-05 01:24:59.971858 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-05 01:24:59.971875 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-05 01:24:59.971892 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-05 01:24:59.971909 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-05 01:24:59.971925 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-05 01:24:59.972101 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 01:24:59.972125 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 01:24:59.972141 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 01:24:59.972158 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 01:25:00.288059 | orchestrator | + openstack network service provider list
2026-04-05 01:25:02.854860 | orchestrator | +---------------+------+---------+
2026-04-05 01:25:02.855052 | orchestrator | | Service Type | Name | Default |
2026-04-05 01:25:02.855075 | orchestrator | +---------------+------+---------+
2026-04-05 01:25:02.855090 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-05 01:25:02.855104 | orchestrator | +---------------+------+---------+
2026-04-05 01:25:03.163675 | orchestrator |
2026-04-05 01:25:03.163757 | orchestrator | # Nova
2026-04-05 01:25:03.163769 | orchestrator |
2026-04-05 01:25:03.163778 | orchestrator | + echo
2026-04-05 01:25:03.163787 | orchestrator | + echo '# Nova'
2026-04-05 01:25:03.163797 | orchestrator | + echo
2026-04-05 01:25:03.163806 | orchestrator | + openstack compute service list
2026-04-05 01:25:06.126123 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 01:25:06.126262 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-05 01:25:06.126290 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 01:25:06.126311 | orchestrator | | e257572a-8eed-495a-8806-c7abc7dc8108 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T01:25:03.000000 |
2026-04-05 01:25:06.126352 | orchestrator | | eb01f695-a571-48e4-8481-9756fc3bc5d3 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T01:25:03.000000 |
2026-04-05 01:25:06.126365 | orchestrator | | ec8233ed-09a9-4f2e-8545-830df67362f3 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T01:24:58.000000 |
2026-04-05 01:25:06.126376 | orchestrator | | d705b165-5a21-4046-8803-e76f049e761d | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-05T01:25:00.000000 |
2026-04-05 01:25:06.126387 | orchestrator | | 739f6f56-dbee-4333-abe5-221939086345 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-05T01:25:00.000000 |
2026-04-05 01:25:06.126398 | orchestrator | | 857c508d-65b8-4d39-90b7-068f45c849dc | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-05T01:25:04.000000 |
2026-04-05 01:25:06.126409 | orchestrator | | 60166c22-29b4-4aa6-b9a7-5f38fcc210c8 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-05T01:25:03.000000 |
2026-04-05 01:25:06.126420 | orchestrator | | 129311c8-b1f8-49a1-8f1b-598d3b978f71 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-05T01:25:05.000000 |
2026-04-05 01:25:06.126432 | orchestrator | | cf45f20e-a93d-4969-a313-87778bcb1f2b | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-05T01:24:56.000000 |
2026-04-05 01:25:06.126451 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 01:25:06.421589 | orchestrator | + openstack hypervisor list
2026-04-05 01:25:09.212473 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 01:25:09.212599 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-05 01:25:09.212616 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 01:25:09.212628 | orchestrator | | 4e2ceb21-bce0-4cf0-ba9c-23d2afea0b94 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-05 01:25:09.212667 | orchestrator | | 1d03f3e6-dd03-4db9-9ddc-787ffe9099b8 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-05 01:25:09.212680 | orchestrator | | fd9380ec-8c09-4f9b-8296-825e7c5a1ee4 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-05 01:25:09.212691 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 01:25:09.529908 | orchestrator |
2026-04-05 01:25:09.530113 | orchestrator | # Run OpenStack test play
2026-04-05 01:25:09.530137 | orchestrator |
2026-04-05 01:25:09.530150 | orchestrator | + echo
2026-04-05 01:25:09.530162 | orchestrator | + echo '# Run OpenStack test play'
2026-04-05 01:25:09.530175 | orchestrator | + echo
2026-04-05 01:25:09.530186 | orchestrator | + osism apply --environment openstack test
2026-04-05 01:25:10.966633 | orchestrator | 2026-04-05 01:25:10 | INFO  | Trying to run play test in environment openstack
2026-04-05 01:25:21.000336 | orchestrator | 2026-04-05 01:25:20 | INFO  | Prepare task for execution of test.
2026-04-05 01:25:21.110276 | orchestrator | 2026-04-05 01:25:21 | INFO  | Task c6105223-6c42-47ec-aedc-b2e499161cdc (test) was prepared for execution.
2026-04-05 01:25:21.110375 | orchestrator | 2026-04-05 01:25:21 | INFO  | It takes a moment until task c6105223-6c42-47ec-aedc-b2e499161cdc (test) has been started and output is visible here.
2026-04-05 01:28:44.178508 | orchestrator |
2026-04-05 01:28:44.178635 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-05 01:28:44.178655 | orchestrator |
2026-04-05 01:28:44.178668 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-05 01:28:44.178680 | orchestrator | Sunday 05 April 2026 01:25:24 +0000 (0:00:00.125) 0:00:00.125 **********
2026-04-05 01:28:44.178691 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.178703 | orchestrator |
2026-04-05 01:28:44.178714 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-05 01:28:44.178725 | orchestrator | Sunday 05 April 2026 01:25:28 +0000 (0:00:03.980) 0:00:04.106 **********
2026-04-05 01:28:44.178736 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.178747 | orchestrator |
2026-04-05 01:28:44.178758 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-05 01:28:44.178769 | orchestrator | Sunday 05 April 2026 01:25:33 +0000 (0:00:04.646) 0:00:08.752 **********
2026-04-05 01:28:44.178780 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.178790 | orchestrator |
2026-04-05 01:28:44.178801 | orchestrator | TASK [Create test project] *****************************************************
2026-04-05 01:28:44.178812 | orchestrator | Sunday 05 April 2026 01:25:40 +0000 (0:00:06.984) 0:00:15.736 **********
2026-04-05 01:28:44.178823 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.178834 | orchestrator |
2026-04-05 01:28:44.178845 | orchestrator | TASK [Create test user] ********************************************************
2026-04-05 01:28:44.178855 | orchestrator | Sunday 05 April 2026 01:25:44 +0000 (0:00:04.273) 0:00:20.009 **********
2026-04-05 01:28:44.178866 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.178877 | orchestrator |
2026-04-05 01:28:44.178888 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-05 01:28:44.178899 | orchestrator | Sunday 05 April 2026 01:25:48 +0000 (0:00:04.565) 0:00:24.575 **********
2026-04-05 01:28:44.178910 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-05 01:28:44.178921 | orchestrator | changed: [localhost] => (item=member)
2026-04-05 01:28:44.178935 | orchestrator | changed: [localhost] => (item=creator)
2026-04-05 01:28:44.178948 | orchestrator |
2026-04-05 01:28:44.178961 | orchestrator | TASK [Create test server group] ************************************************
2026-04-05 01:28:44.178973 | orchestrator | Sunday 05 April 2026 01:26:01 +0000 (0:00:12.495) 0:00:37.070 **********
2026-04-05 01:28:44.179003 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.179016 | orchestrator |
2026-04-05 01:28:44.179029 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-05 01:28:44.179042 | orchestrator | Sunday 05 April 2026 01:26:05 +0000 (0:00:04.652) 0:00:41.723 **********
2026-04-05 01:28:44.179076 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.179089 | orchestrator |
2026-04-05 01:28:44.179103 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-05 01:28:44.179115 | orchestrator | Sunday 05 April 2026 01:26:11 +0000 (0:00:05.431) 0:00:47.155 **********
2026-04-05 01:28:44.179127 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.179138 | orchestrator |
2026-04-05 01:28:44.179173 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-05 01:28:44.179185 | orchestrator | Sunday 05 April 2026 01:26:15 +0000 (0:00:04.552) 0:00:51.707 **********
2026-04-05 01:28:44.179196 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.179207 | orchestrator |
2026-04-05 01:28:44.179218 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-05 01:28:44.179229 | orchestrator | Sunday 05 April 2026 01:26:20 +0000 (0:00:04.125) 0:00:55.833 **********
2026-04-05 01:28:44.179239 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.179250 | orchestrator |
2026-04-05 01:28:44.179261 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-05 01:28:44.179272 | orchestrator | Sunday 05 April 2026 01:26:24 +0000 (0:00:04.348) 0:01:00.182 **********
2026-04-05 01:28:44.179282 | orchestrator | changed: [localhost]
2026-04-05 01:28:44.179293 | orchestrator |
2026-04-05 01:28:44.179304 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-05 01:28:44.179315 | orchestrator | Sunday 05 April 2026 01:26:28 +0000 (0:00:04.395) 0:01:04.577 **********
2026-04-05 01:28:44.179325 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-05 01:28:44.179336 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-05 01:28:44.179347 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-05 01:28:44.179358 | orchestrator |
2026-04-05 01:28:44.179368 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-05 01:28:44.179379 | orchestrator | Sunday 05 April 2026 01:26:44 +0000 (0:00:15.349) 0:01:19.927 **********
2026-04-05 01:28:44.179391 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-05 01:28:44.179402 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-05 01:28:44.179413 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-05 01:28:44.179424 | orchestrator |
2026-04-05 01:28:44.179435 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-05 01:28:44.179446 | orchestrator | Sunday 05 April 2026 01:27:01 +0000 (0:00:17.017) 0:01:36.945 **********
2026-04-05 01:28:44.179457 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-05 01:28:44.179468 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-05 01:28:44.179479 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-05 01:28:44.179490 | orchestrator |
2026-04-05 01:28:44.179500 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-05 01:28:44.179511 | orchestrator |
2026-04-05 01:28:44.179522 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-05 01:28:44.179551 | orchestrator | Sunday 05 April 2026 01:27:35 +0000 (0:00:34.144) 0:02:11.089 **********
2026-04-05 01:28:44.179563 | orchestrator | ok: [localhost]
2026-04-05 01:28:44.179575 | orchestrator |
2026-04-05 01:28:44.179587 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-05 01:28:44.179597 | orchestrator | Sunday 05 April 2026 01:27:39 +0000 (0:00:03.877) 0:02:14.967 **********
2026-04-05 01:28:44.179608 | orchestrator | skipping: [localhost]
2026-04-05 01:28:44.179619 | orchestrator |
2026-04-05 01:28:44.179630 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-05 01:28:44.179641 | orchestrator | Sunday 05 April 2026 01:27:39 +0000 (0:00:00.048) 0:02:15.015 **********
2026-04-05 01:28:44.179661 | orchestrator | skipping: [localhost]
2026-04-05 01:28:44.179672 | orchestrator |
2026-04-05 01:28:44.179683 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-05 01:28:44.179694 | orchestrator | Sunday 05 April 2026 01:27:39 +0000 (0:00:00.052) 0:02:15.068 **********
2026-04-05 01:28:44.179705 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:28:44.179716 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:28:44.179727 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:28:44.179738 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:28:44.179749 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:28:44.179760 | orchestrator | skipping: [localhost]
2026-04-05 01:28:44.179770 | orchestrator |
2026-04-05 01:28:44.179781 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-05 01:28:44.179792 | orchestrator | Sunday 05 April 2026 01:27:39 +0000 (0:00:00.185) 0:02:15.254 **********
2026-04-05 01:28:44.179803 | orchestrator | skipping: [localhost]
2026-04-05 01:28:44.179814 | orchestrator |
2026-04-05 01:28:44.179825 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-05 01:28:44.179836 | orchestrator | Sunday 05 April 2026 01:27:39 +0000 (0:00:00.159) 0:02:15.413 **********
2026-04-05 01:28:44.179847 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:28:44.179857 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:28:44.179874 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:28:44.179885 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:28:44.179896 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:28:44.179907 | orchestrator |
2026-04-05 01:28:44.179918 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-05 01:28:44.179929 | orchestrator | Sunday 05 April 2026 01:27:45 +0000 (0:00:05.351) 0:02:20.764 **********
2026-04-05 01:28:44.179940 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-05 01:28:44.179951 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-05 01:28:44.179962 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-05 01:28:44.179973 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-05 01:28:44.179984 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-04-05 01:28:44.179998 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j355363043126.2828', 'results_file': '/ansible/.ansible_async/j355363043126.2828', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:28:44.180013 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j963205987434.2860', 'results_file': '/ansible/.ansible_async/j963205987434.2860', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:28:44.180024 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j200569612274.2885', 'results_file': '/ansible/.ansible_async/j200569612274.2885', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:28:44.180035 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j631084091099.2910', 'results_file': '/ansible/.ansible_async/j631084091099.2910', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:28:44.180054 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j283814792132.2935', 'results_file': '/ansible/.ansible_async/j283814792132.2935', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-05 01:28:44.180066 | orchestrator |
2026-04-05 01:28:44.180077 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-05 01:28:44.180088 | orchestrator | Sunday 05 April 2026 01:28:43 +0000 (0:00:58.052) 0:03:18.816 **********
2026-04-05 01:28:44.180105 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:29:59.985491 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:29:59.985577 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:29:59.985588 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:29:59.985599 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:29:59.985609 | orchestrator |
2026-04-05 01:29:59.985619 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-05 01:29:59.985646 | orchestrator | Sunday 05 April 2026 01:28:48 +0000 (0:00:04.941) 0:03:23.758 **********
2026-04-05 01:29:59.985666 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-05 01:29:59.985687 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j458964149810.3047', 'results_file': '/ansible/.ansible_async/j458964149810.3047', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.985708 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j690779665097.3072', 'results_file': '/ansible/.ansible_async/j690779665097.3072', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.985725 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j894108689560.3097', 'results_file': '/ansible/.ansible_async/j894108689560.3097', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.985765 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j657583653725.3122', 'results_file': '/ansible/.ansible_async/j657583653725.3122', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.985785 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j208101361733.3147', 'results_file': '/ansible/.ansible_async/j208101361733.3147', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.985802 | orchestrator |
2026-04-05 01:29:59.985820 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-05 01:29:59.985839 | orchestrator | Sunday 05 April 2026 01:28:57 +0000 (0:00:09.758) 0:03:33.517 **********
2026-04-05 01:29:59.985857 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:29:59.985875 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:29:59.985894 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:29:59.985912 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:29:59.985931 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:29:59.985944 | orchestrator |
2026-04-05 01:29:59.985955 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-05 01:29:59.985992 | orchestrator | Sunday 05 April 2026 01:29:02 +0000 (0:00:04.891) 0:03:38.408 **********
2026-04-05 01:29:59.986003 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-05 01:29:59.986080 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j511370029110.3216', 'results_file': '/ansible/.ansible_async/j511370029110.3216', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.986096 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j972455735988.3241', 'results_file': '/ansible/.ansible_async/j972455735988.3241', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.986111 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j254311220625.3267', 'results_file': '/ansible/.ansible_async/j254311220625.3267', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.986125 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j522153230961.3293', 'results_file': '/ansible/.ansible_async/j522153230961.3293', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.986160 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j176562796338.3319', 'results_file': '/ansible/.ansible_async/j176562796338.3319', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-05 01:29:59.986174 | orchestrator |
2026-04-05 01:29:59.986188 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-05 01:29:59.986259 | orchestrator | Sunday 05 April 2026 01:29:13 +0000 (0:00:10.572) 0:03:48.980 **********
2026-04-05 01:29:59.986273 | orchestrator | changed: [localhost]
2026-04-05 01:29:59.986288 | orchestrator |
2026-04-05 01:29:59.986301 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-05 01:29:59.986313 | orchestrator | Sunday 05 April 2026 01:29:19 +0000 (0:00:06.656) 0:03:55.637 **********
2026-04-05 01:29:59.986325 | orchestrator | changed: [localhost]
2026-04-05 01:29:59.986338 | orchestrator |
2026-04-05 01:29:59.986350 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-05 01:29:59.986362 | orchestrator | Sunday 05 April 2026 01:29:33 +0000 (0:00:13.853) 0:04:09.490 **********
2026-04-05 01:29:59.986376 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 01:29:59.986388 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 01:29:59.986401 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 01:29:59.986414 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 01:29:59.986426 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 01:29:59.986439 | orchestrator |
2026-04-05 01:29:59.986449 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-05 01:29:59.986460 | orchestrator | Sunday 05 April 2026 01:29:59 +0000 (0:00:25.863) 0:04:35.354 **********
2026-04-05 01:29:59.986471 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-05 01:29:59.986482 | orchestrator |  "msg": "test: 192.168.112.102"
2026-04-05 01:29:59.986493 | orchestrator | }
2026-04-05 01:29:59.986505 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-05 01:29:59.986516 | orchestrator |  "msg": "test-1: 192.168.112.128"
2026-04-05 01:29:59.986527 | orchestrator | }
2026-04-05 01:29:59.986537 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-05 01:29:59.986548 | orchestrator |  "msg": "test-2: 192.168.112.116"
2026-04-05 01:29:59.986559 | orchestrator | }
2026-04-05 01:29:59.986570 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-05 01:29:59.986593 | orchestrator |  "msg": "test-3: 192.168.112.193"
2026-04-05 01:29:59.986604 | orchestrator | }
2026-04-05 01:29:59.986625 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-05 01:29:59.986636 | orchestrator |  "msg": "test-4: 192.168.112.111"
2026-04-05 01:29:59.986647 | orchestrator | }
2026-04-05 01:29:59.986658 | orchestrator |
2026-04-05 01:29:59.986669 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 01:29:59.986680 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 01:29:59.986693 | orchestrator |
2026-04-05 01:29:59.986704 | orchestrator |
2026-04-05 01:29:59.986715 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 01:29:59.986725 | orchestrator | Sunday 05 April 2026 01:29:59 +0000 (0:00:00.123) 0:04:35.477 **********
2026-04-05 01:29:59.986736 | orchestrator | ===============================================================================
2026-04-05 01:29:59.986747 | orchestrator | Wait for instance creation to complete --------------------------------- 58.05s
2026-04-05 01:29:59.986758 | orchestrator | Create test routers ---------------------------------------------------- 34.14s
2026-04-05 01:29:59.986768 | orchestrator | Create floating ip addresses ------------------------------------------- 25.86s
2026-04-05 01:29:59.986779 | orchestrator | Create test subnets ---------------------------------------------------- 17.02s
2026-04-05 01:29:59.986790 | orchestrator | Create test networks --------------------------------------------------- 15.35s
2026-04-05 01:29:59.986800 | orchestrator | Attach test volume ----------------------------------------------------- 13.85s
2026-04-05 01:29:59.986811 | orchestrator | Add member roles to user test ------------------------------------------ 12.50s
2026-04-05 01:29:59.986822 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.57s
2026-04-05 01:29:59.986833 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.76s
2026-04-05 01:29:59.986844 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.98s
2026-04-05 01:29:59.986863 | orchestrator | Create test volume ------------------------------------------------------ 6.66s
2026-04-05 01:29:59.986881 | orchestrator | Create ssh security group ----------------------------------------------- 5.43s
2026-04-05 01:29:59.986899 | orchestrator | Create test instances --------------------------------------------------- 5.35s
2026-04-05 01:29:59.986918 | orchestrator | Add metadata to instances ----------------------------------------------- 4.94s
2026-04-05 01:29:59.986937 | orchestrator | Add tag to instances ---------------------------------------------------- 4.89s
2026-04-05 01:29:59.986955 | orchestrator | Create test server group ------------------------------------------------ 4.65s
2026-04-05 01:29:59.986973 | orchestrator | Create test-admin user -------------------------------------------------- 4.65s
2026-04-05 01:29:59.986994 | orchestrator | Create test user -------------------------------------------------------- 4.57s
2026-04-05 01:29:59.987013 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.55s
2026-04-05 01:29:59.987032 | orchestrator | Create test keypair ----------------------------------------------------- 4.40s
2026-04-05 01:30:00.210586 | orchestrator | + server_list
2026-04-05 01:30:00.210702 | orchestrator | + openstack --os-cloud test server list
2026-04-05 01:30:04.228867 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05 01:30:04.228982 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-05 01:30:04.228998 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05 01:30:04.229010 | orchestrator | | e707db68-9716-42ed-ab71-c8afc05b7d2e | test-3 | ACTIVE | test-2=192.168.112.193, 192.168.201.235 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:30:04.229021 | orchestrator | | df6a7122-009a-4906-ab7b-ffcfc5c9cde5 | test-4 | ACTIVE | test-3=192.168.112.111, 192.168.202.183 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:30:04.229058 | orchestrator | | 09032338-5e97-4fde-ad06-da67c25a0e39 | test-1 | ACTIVE | test-1=192.168.112.128, 192.168.200.142 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:30:04.229070 | orchestrator | | 42003049-8258-4648-9ccf-4b69d906ac3b | test-2 | ACTIVE | test-2=192.168.112.116, 192.168.201.251 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 01:30:04.229081 | orchestrator | | 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd | test | ACTIVE | 
test-1=192.168.112.102, 192.168.200.61 | N/A (booted from volume) | SCS-1L-1 | 2026-04-05 01:30:04.229092 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-05 01:30:04.549919 | orchestrator | + openstack --os-cloud test server show test 2026-04-05 01:30:08.122473 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:08.122601 | orchestrator | | Field | Value | 2026-04-05 01:30:08.122618 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:08.122631 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 01:30:08.122643 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 01:30:08.122655 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 01:30:08.122666 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-05 01:30:08.122679 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 01:30:08.122709 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 01:30:08.122740 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 01:30:08.122758 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 
01:30:08.122770 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 01:30:08.122781 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 01:30:08.122792 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 01:30:08.122804 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 01:30:08.122815 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 01:30:08.122826 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 01:30:08.122846 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 01:30:08.122857 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:28:19.000000 | 2026-04-05 01:30:08.122876 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 01:30:08.122893 | orchestrator | | accessIPv4 | | 2026-04-05 01:30:08.122905 | orchestrator | | accessIPv6 | | 2026-04-05 01:30:08.122916 | orchestrator | | addresses | test-1=192.168.112.102, 192.168.200.61 | 2026-04-05 01:30:08.122927 | orchestrator | | config_drive | | 2026-04-05 01:30:08.122939 | orchestrator | | created | 2026-04-05T01:27:49Z | 2026-04-05 01:30:08.122950 | orchestrator | | description | None | 2026-04-05 01:30:08.122971 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 01:30:08.122985 | orchestrator | | hostId | a5787ce80366ffa87b7d9d431a6a7deb70b1182ee25143d36da79b75 | 2026-04-05 01:30:08.122999 | orchestrator | | host_status | None | 2026-04-05 01:30:08.123020 | orchestrator | | id | 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd | 2026-04-05 01:30:08.123040 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 01:30:08.123055 | orchestrator | | 
key_name | test | 2026-04-05 01:30:08.123067 | orchestrator | | locked | False | 2026-04-05 01:30:08.123078 | orchestrator | | locked_reason | None | 2026-04-05 01:30:08.123090 | orchestrator | | name | test | 2026-04-05 01:30:08.123107 | orchestrator | | pinned_availability_zone | None | 2026-04-05 01:30:08.123119 | orchestrator | | progress | 0 | 2026-04-05 01:30:08.123130 | orchestrator | | project_id | ba88b993484a402381dfd70309321fac | 2026-04-05 01:30:08.123141 | orchestrator | | properties | hostname='test' | 2026-04-05 01:30:08.123160 | orchestrator | | security_groups | name='ssh' | 2026-04-05 01:30:08.123172 | orchestrator | | | name='icmp' | 2026-04-05 01:30:08.123183 | orchestrator | | server_groups | None | 2026-04-05 01:30:08.123194 | orchestrator | | status | ACTIVE | 2026-04-05 01:30:08.123261 | orchestrator | | tags | test | 2026-04-05 01:30:08.123274 | orchestrator | | trusted_image_certificates | None | 2026-04-05 01:30:08.123304 | orchestrator | | updated | 2026-04-05T01:28:49Z | 2026-04-05 01:30:08.123316 | orchestrator | | user_id | 905a6854adc8413da1fb65dd2e05f27e | 2026-04-05 01:30:08.123328 | orchestrator | | volumes_attached | delete_on_termination='True', id='09c35500-cf59-4ff8-b83c-2b5d84d00e83' | 2026-04-05 01:30:08.123339 | orchestrator | | | delete_on_termination='False', id='9bc87dd9-f241-442d-9ae5-62e793e8586d' | 2026-04-05 01:30:08.125681 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:08.449746 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-05 01:30:11.648006 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:11.648147 | orchestrator | | Field | Value | 2026-04-05 01:30:11.648164 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:11.648177 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 01:30:11.648247 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 01:30:11.648260 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 01:30:11.648271 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-05 01:30:11.648282 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 01:30:11.648294 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 01:30:11.648322 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 01:30:11.648356 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 01:30:11.648378 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 01:30:11.648390 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 01:30:11.648410 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 01:30:11.648421 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 01:30:11.648432 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-04-05 01:30:11.648443 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 01:30:11.648454 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 01:30:11.648465 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:28:17.000000 | 2026-04-05 01:30:11.648483 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 01:30:11.648501 | orchestrator | | accessIPv4 | | 2026-04-05 01:30:11.648516 | orchestrator | | accessIPv6 | | 2026-04-05 01:30:11.648537 | orchestrator | | addresses | test-1=192.168.112.128, 192.168.200.142 | 2026-04-05 01:30:11.648550 | orchestrator | | config_drive | | 2026-04-05 01:30:11.648564 | orchestrator | | created | 2026-04-05T01:27:50Z | 2026-04-05 01:30:11.648577 | orchestrator | | description | None | 2026-04-05 01:30:11.648591 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 01:30:11.648603 | orchestrator | | hostId | a5787ce80366ffa87b7d9d431a6a7deb70b1182ee25143d36da79b75 | 2026-04-05 01:30:11.648622 | orchestrator | | host_status | None | 2026-04-05 01:30:11.648647 | orchestrator | | id | 09032338-5e97-4fde-ad06-da67c25a0e39 | 2026-04-05 01:30:11.648666 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 01:30:11.648679 | orchestrator | | key_name | test | 2026-04-05 01:30:11.648700 | orchestrator | | locked | False | 2026-04-05 01:30:11.648713 | orchestrator | | locked_reason | None | 2026-04-05 01:30:11.648727 | orchestrator | | name | test-1 | 2026-04-05 01:30:11.648740 | orchestrator | | pinned_availability_zone | None | 2026-04-05 01:30:11.648753 | orchestrator | | progress | 0 | 2026-04-05 01:30:11.648766 | orchestrator | 
| project_id | ba88b993484a402381dfd70309321fac | 2026-04-05 01:30:11.648779 | orchestrator | | properties | hostname='test-1' | 2026-04-05 01:30:11.648803 | orchestrator | | security_groups | name='ssh' | 2026-04-05 01:30:11.648818 | orchestrator | | | name='icmp' | 2026-04-05 01:30:11.648838 | orchestrator | | server_groups | None | 2026-04-05 01:30:11.648853 | orchestrator | | status | ACTIVE | 2026-04-05 01:30:11.648878 | orchestrator | | tags | test | 2026-04-05 01:30:11.648899 | orchestrator | | trusted_image_certificates | None | 2026-04-05 01:30:11.648911 | orchestrator | | updated | 2026-04-05T01:28:50Z | 2026-04-05 01:30:11.648922 | orchestrator | | user_id | 905a6854adc8413da1fb65dd2e05f27e | 2026-04-05 01:30:11.648933 | orchestrator | | volumes_attached | delete_on_termination='True', id='23bb8bb1-2b36-4e3e-af29-481582974885' | 2026-04-05 01:30:11.650695 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:11.847656 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-05 01:30:14.729668 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:14.729826 | orchestrator | | Field | Value | 2026-04-05 01:30:14.729847 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:14.729877 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 01:30:14.729890 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 01:30:14.729901 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 01:30:14.729913 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-05 01:30:14.729924 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 01:30:14.729936 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 01:30:14.729967 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 01:30:14.729993 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 01:30:14.730006 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 01:30:14.730126 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 01:30:14.730143 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 01:30:14.730155 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 01:30:14.730166 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 01:30:14.730178 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 01:30:14.730189 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 01:30:14.730201 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:28:17.000000 | 2026-04-05 01:30:14.730286 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 01:30:14.730305 | orchestrator | | accessIPv4 | | 2026-04-05 01:30:14.730317 | orchestrator | | accessIPv6 | | 2026-04-05 01:30:14.730329 | orchestrator | | 
addresses | test-2=192.168.112.116, 192.168.201.251 | 2026-04-05 01:30:14.730340 | orchestrator | | config_drive | | 2026-04-05 01:30:14.730352 | orchestrator | | created | 2026-04-05T01:27:50Z | 2026-04-05 01:30:14.730364 | orchestrator | | description | None | 2026-04-05 01:30:14.730375 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 01:30:14.730387 | orchestrator | | hostId | bacc9e311bd2e0ef885dc2bbf4baac5f8fc49ce0fb9ff28d5a1eb482 | 2026-04-05 01:30:14.730398 | orchestrator | | host_status | None | 2026-04-05 01:30:14.730425 | orchestrator | | id | 42003049-8258-4648-9ccf-4b69d906ac3b | 2026-04-05 01:30:14.730442 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 01:30:14.730454 | orchestrator | | key_name | test | 2026-04-05 01:30:14.730466 | orchestrator | | locked | False | 2026-04-05 01:30:14.730477 | orchestrator | | locked_reason | None | 2026-04-05 01:30:14.730489 | orchestrator | | name | test-2 | 2026-04-05 01:30:14.730501 | orchestrator | | pinned_availability_zone | None | 2026-04-05 01:30:14.730512 | orchestrator | | progress | 0 | 2026-04-05 01:30:14.730524 | orchestrator | | project_id | ba88b993484a402381dfd70309321fac | 2026-04-05 01:30:14.730542 | orchestrator | | properties | hostname='test-2' | 2026-04-05 01:30:14.730561 | orchestrator | | security_groups | name='ssh' | 2026-04-05 01:30:14.730579 | orchestrator | | | name='icmp' | 2026-04-05 01:30:14.730591 | orchestrator | | server_groups | None | 2026-04-05 01:30:14.730603 | orchestrator | | status | ACTIVE | 2026-04-05 01:30:14.730621 | orchestrator | | tags | test | 2026-04-05 01:30:14.730639 | orchestrator | | 
trusted_image_certificates | None | 2026-04-05 01:30:14.730656 | orchestrator | | updated | 2026-04-05T01:28:50Z | 2026-04-05 01:30:14.730673 | orchestrator | | user_id | 905a6854adc8413da1fb65dd2e05f27e | 2026-04-05 01:30:14.730698 | orchestrator | | volumes_attached | delete_on_termination='True', id='79e2b157-893e-4c4f-aed1-08384e05d7f2' | 2026-04-05 01:30:14.731957 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:15.031272 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-05 01:30:18.126561 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:18.126645 | orchestrator | | Field | Value | 2026-04-05 01:30:18.126654 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:18.126661 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 01:30:18.126668 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-05 01:30:18.126675 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 01:30:18.126681 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-05 01:30:18.126701 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 01:30:18.126708 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 01:30:18.126726 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 01:30:18.126734 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 01:30:18.126743 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 01:30:18.126750 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 01:30:18.126756 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 01:30:18.126763 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 01:30:18.126769 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 01:30:18.126780 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 01:30:18.126787 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 01:30:18.126793 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:28:21.000000 | 2026-04-05 01:30:18.126804 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 01:30:18.126811 | orchestrator | | accessIPv4 | | 2026-04-05 01:30:18.127065 | orchestrator | | accessIPv6 | | 2026-04-05 01:30:18.127081 | orchestrator | | addresses | test-2=192.168.112.193, 192.168.201.235 | 2026-04-05 01:30:18.127092 | orchestrator | | config_drive | | 2026-04-05 01:30:18.127104 | orchestrator | | created | 2026-04-05T01:27:54Z | 2026-04-05 01:30:18.127115 | orchestrator | | description | None | 2026-04-05 01:30:18.127132 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 01:30:18.127139 | orchestrator | | hostId | dc990d04bbe3b7a762c943c0cecdc90ce6f0e5e857c4608b10e4ad5b | 2026-04-05 01:30:18.127151 | orchestrator | | host_status | None | 2026-04-05 01:30:18.127165 | orchestrator | | id | e707db68-9716-42ed-ab71-c8afc05b7d2e | 2026-04-05 01:30:18.127173 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 01:30:18.127181 | orchestrator | | key_name | test | 2026-04-05 01:30:18.127188 | orchestrator | | locked | False | 2026-04-05 01:30:18.127196 | orchestrator | | locked_reason | None | 2026-04-05 01:30:18.127203 | orchestrator | | name | test-3 | 2026-04-05 01:30:18.127240 | orchestrator | | pinned_availability_zone | None | 2026-04-05 01:30:18.127250 | orchestrator | | progress | 0 | 2026-04-05 01:30:18.127266 | orchestrator | | project_id | ba88b993484a402381dfd70309321fac | 2026-04-05 01:30:18.127277 | orchestrator | | properties | hostname='test-3' | 2026-04-05 01:30:18.127294 | orchestrator | | security_groups | name='ssh' | 2026-04-05 01:30:18.127306 | orchestrator | | | name='icmp' | 2026-04-05 01:30:18.127317 | orchestrator | | server_groups | None | 2026-04-05 01:30:18.127329 | orchestrator | | status | ACTIVE | 2026-04-05 01:30:18.127337 | orchestrator | | tags | test | 2026-04-05 01:30:18.127350 | orchestrator | | trusted_image_certificates | None | 2026-04-05 01:30:18.127358 | orchestrator | | updated | 2026-04-05T01:28:51Z | 2026-04-05 01:30:18.127366 | orchestrator | | user_id | 905a6854adc8413da1fb65dd2e05f27e | 2026-04-05 01:30:18.127378 | orchestrator | | volumes_attached | delete_on_termination='True', id='6b4714b3-d288-4dfa-9a45-a5d10f64ffeb' | 2026-04-05 01:30:18.128197 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:18.419460 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-05 01:30:21.472282 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:21.472423 | orchestrator | | Field | Value | 2026-04-05 01:30:21.472445 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:21.472460 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 01:30:21.472501 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 01:30:21.472517 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 01:30:21.472530 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-05 01:30:21.472544 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 01:30:21.472573 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 
01:30:21.472621 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 01:30:21.472637 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 01:30:21.472650 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 01:30:21.472665 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 01:30:21.472696 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 01:30:21.472709 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 01:30:21.472721 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 01:30:21.472732 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 01:30:21.472744 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 01:30:21.472759 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T01:28:17.000000 | 2026-04-05 01:30:21.472779 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 01:30:21.472792 | orchestrator | | accessIPv4 | | 2026-04-05 01:30:21.472805 | orchestrator | | accessIPv6 | | 2026-04-05 01:30:21.472817 | orchestrator | | addresses | test-3=192.168.112.111, 192.168.202.183 | 2026-04-05 01:30:21.472838 | orchestrator | | config_drive | | 2026-04-05 01:30:21.472850 | orchestrator | | created | 2026-04-05T01:27:53Z | 2026-04-05 01:30:21.472862 | orchestrator | | description | None | 2026-04-05 01:30:21.472874 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 01:30:21.472887 | orchestrator | | hostId | bacc9e311bd2e0ef885dc2bbf4baac5f8fc49ce0fb9ff28d5a1eb482 | 2026-04-05 01:30:21.472904 | orchestrator | | host_status | None | 2026-04-05 01:30:21.472924 | orchestrator | | id | 
df6a7122-009a-4906-ab7b-ffcfc5c9cde5 | 2026-04-05 01:30:21.472936 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 01:30:21.472948 | orchestrator | | key_name | test | 2026-04-05 01:30:21.472971 | orchestrator | | locked | False | 2026-04-05 01:30:21.472984 | orchestrator | | locked_reason | None | 2026-04-05 01:30:21.472996 | orchestrator | | name | test-4 | 2026-04-05 01:30:21.473008 | orchestrator | | pinned_availability_zone | None | 2026-04-05 01:30:21.473020 | orchestrator | | progress | 0 | 2026-04-05 01:30:21.473032 | orchestrator | | project_id | ba88b993484a402381dfd70309321fac | 2026-04-05 01:30:21.473044 | orchestrator | | properties | hostname='test-4' | 2026-04-05 01:30:21.473062 | orchestrator | | security_groups | name='ssh' | 2026-04-05 01:30:21.473075 | orchestrator | | | name='icmp' | 2026-04-05 01:30:21.473095 | orchestrator | | server_groups | None | 2026-04-05 01:30:21.473107 | orchestrator | | status | ACTIVE | 2026-04-05 01:30:21.473119 | orchestrator | | tags | test | 2026-04-05 01:30:21.473131 | orchestrator | | trusted_image_certificates | None | 2026-04-05 01:30:21.473143 | orchestrator | | updated | 2026-04-05T01:28:52Z | 2026-04-05 01:30:21.473155 | orchestrator | | user_id | 905a6854adc8413da1fb65dd2e05f27e | 2026-04-05 01:30:21.473255 | orchestrator | | volumes_attached | delete_on_termination='True', id='4a5043b4-eed8-41ef-b823-172de1009a8c' | 2026-04-05 01:30:21.475454 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 01:30:21.768323 | orchestrator | + server_ping 2026-04-05 01:30:21.768821 | orchestrator | ++ tr -d '\r' 2026-04-05 
01:30:21.769102 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-05 01:30:24.937078 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:30:24.937201 | orchestrator | + ping -c3 192.168.112.102 2026-04-05 01:30:24.950319 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 2026-04-05 01:30:24.950471 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=7.50 ms 2026-04-05 01:30:25.947520 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.51 ms 2026-04-05 01:30:26.949031 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.85 ms 2026-04-05 01:30:26.949140 | orchestrator | 2026-04-05 01:30:26.949156 | orchestrator | --- 192.168.112.102 ping statistics --- 2026-04-05 01:30:26.949167 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:30:26.949177 | orchestrator | rtt min/avg/max/mdev = 1.853/3.955/7.499/2.520 ms 2026-04-05 01:30:26.949393 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:30:26.949415 | orchestrator | + ping -c3 192.168.112.116 2026-04-05 01:30:26.961993 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-04-05 01:30:26.962190 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=8.24 ms 2026-04-05 01:30:27.957550 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.44 ms 2026-04-05 01:30:28.959206 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.69 ms 2026-04-05 01:30:28.959468 | orchestrator | 2026-04-05 01:30:28.959486 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-05 01:30:28.959501 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:30:28.959512 | orchestrator | rtt min/avg/max/mdev = 1.691/4.121/8.238/2.926 ms 2026-04-05 01:30:28.959536 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:30:28.959549 | orchestrator | + ping -c3 192.168.112.111 2026-04-05 01:30:28.972350 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data. 2026-04-05 01:30:28.972414 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=7.81 ms 2026-04-05 01:30:29.969404 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.20 ms 2026-04-05 01:30:30.970804 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=1.62 ms 2026-04-05 01:30:30.970918 | orchestrator | 2026-04-05 01:30:30.970940 | orchestrator | --- 192.168.112.111 ping statistics --- 2026-04-05 01:30:30.970956 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:30:30.970971 | orchestrator | rtt min/avg/max/mdev = 1.623/3.878/7.810/2.790 ms 2026-04-05 01:30:30.970987 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:30:30.971002 | orchestrator | + ping -c3 192.168.112.128 2026-04-05 01:30:30.983333 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data. 
2026-04-05 01:30:30.983429 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=7.56 ms 2026-04-05 01:30:31.980011 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=1.96 ms 2026-04-05 01:30:32.981682 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.92 ms 2026-04-05 01:30:32.981783 | orchestrator | 2026-04-05 01:30:32.981799 | orchestrator | --- 192.168.112.128 ping statistics --- 2026-04-05 01:30:32.981813 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:30:32.981824 | orchestrator | rtt min/avg/max/mdev = 1.917/3.812/7.561/2.650 ms 2026-04-05 01:30:32.981836 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:30:32.981849 | orchestrator | + ping -c3 192.168.112.193 2026-04-05 01:30:32.992551 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 2026-04-05 01:30:32.992624 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=6.26 ms 2026-04-05 01:30:33.990480 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.24 ms 2026-04-05 01:30:34.991963 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.85 ms 2026-04-05 01:30:34.992142 | orchestrator | 2026-04-05 01:30:34.992160 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-04-05 01:30:34.992172 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:30:34.992183 | orchestrator | rtt min/avg/max/mdev = 1.849/3.446/6.255/1.992 ms 2026-04-05 01:30:34.992202 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-05 01:30:34.992212 | orchestrator | + compute_list 2026-04-05 01:30:34.992326 | orchestrator | + osism manage compute list testbed-node-3 2026-04-05 01:30:36.648178 | orchestrator | 2026-04-05 01:30:36 | ERROR  | Unable to get ansible vault password 2026-04-05 01:30:36.648324 
| orchestrator | 2026-04-05 01:30:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:30:36.648343 | orchestrator | 2026-04-05 01:30:36 | ERROR  | Dropping encrypted entries 2026-04-05 01:30:40.613496 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:30:40.613594 | orchestrator | | ID | Name | Status | 2026-04-05 01:30:40.613607 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:30:40.613617 | orchestrator | | e707db68-9716-42ed-ab71-c8afc05b7d2e | test-3 | ACTIVE | 2026-04-05 01:30:40.613627 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:30:40.978885 | orchestrator | + osism manage compute list testbed-node-4 2026-04-05 01:30:42.638166 | orchestrator | 2026-04-05 01:30:42 | ERROR  | Unable to get ansible vault password 2026-04-05 01:30:42.638387 | orchestrator | 2026-04-05 01:30:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:30:42.638421 | orchestrator | 2026-04-05 01:30:42 | ERROR  | Dropping encrypted entries 2026-04-05 01:30:44.631623 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:30:44.631729 | orchestrator | | ID | Name | Status | 2026-04-05 01:30:44.631741 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:30:44.631750 | orchestrator | | 09032338-5e97-4fde-ad06-da67c25a0e39 | test-1 | ACTIVE | 2026-04-05 01:30:44.631759 | orchestrator | | 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd | test | ACTIVE | 2026-04-05 01:30:44.631767 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:30:44.987901 | orchestrator | + osism manage compute list testbed-node-5 2026-04-05 01:30:46.699900 | orchestrator | 2026-04-05 01:30:46 | ERROR  | Unable to get ansible vault password 
2026-04-05 01:30:46.700168 | orchestrator | 2026-04-05 01:30:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:30:46.700186 | orchestrator | 2026-04-05 01:30:46 | ERROR  | Dropping encrypted entries 2026-04-05 01:30:48.653135 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:30:48.653319 | orchestrator | | ID | Name | Status | 2026-04-05 01:30:48.653337 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:30:48.653350 | orchestrator | | df6a7122-009a-4906-ab7b-ffcfc5c9cde5 | test-4 | ACTIVE | 2026-04-05 01:30:48.653370 | orchestrator | | 42003049-8258-4648-9ccf-4b69d906ac3b | test-2 | ACTIVE | 2026-04-05 01:30:48.653389 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:30:49.009640 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-05 01:30:50.658864 | orchestrator | 2026-04-05 01:30:50 | ERROR  | Unable to get ansible vault password 2026-04-05 01:30:50.658998 | orchestrator | 2026-04-05 01:30:50 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:30:50.659016 | orchestrator | 2026-04-05 01:30:50 | ERROR  | Dropping encrypted entries 2026-04-05 01:30:52.437098 | orchestrator | 2026-04-05 01:30:52 | INFO  | Live migrating server 09032338-5e97-4fde-ad06-da67c25a0e39 2026-04-05 01:31:06.396844 | orchestrator | 2026-04-05 01:31:06 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 01:31:08.825129 | orchestrator | 2026-04-05 01:31:08 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 01:31:11.271677 | orchestrator | 2026-04-05 01:31:11 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 
01:31:13.584811 | orchestrator | 2026-04-05 01:31:13 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 01:31:15.915815 | orchestrator | 2026-04-05 01:31:15 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 01:31:18.329058 | orchestrator | 2026-04-05 01:31:18 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 01:31:20.661082 | orchestrator | 2026-04-05 01:31:20 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 01:31:22.979560 | orchestrator | 2026-04-05 01:31:22 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress 2026-04-05 01:31:25.286689 | orchestrator | 2026-04-05 01:31:25 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) completed with status ACTIVE 2026-04-05 01:31:25.286800 | orchestrator | 2026-04-05 01:31:25 | INFO  | Live migrating server 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd 2026-04-05 01:31:37.104657 | orchestrator | 2026-04-05 01:31:37 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:39.461686 | orchestrator | 2026-04-05 01:31:39 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:41.824014 | orchestrator | 2026-04-05 01:31:41 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:44.262526 | orchestrator | 2026-04-05 01:31:44 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:46.653892 | orchestrator | 2026-04-05 01:31:46 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:49.020845 | orchestrator | 2026-04-05 01:31:49 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd 
(test) is still in progress 2026-04-05 01:31:51.297100 | orchestrator | 2026-04-05 01:31:51 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:53.585203 | orchestrator | 2026-04-05 01:31:53 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:55.915057 | orchestrator | 2026-04-05 01:31:55 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:31:58.249745 | orchestrator | 2026-04-05 01:31:58 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress 2026-04-05 01:32:00.615571 | orchestrator | 2026-04-05 01:32:00 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) completed with status ACTIVE 2026-04-05 01:32:00.969690 | orchestrator | + compute_list 2026-04-05 01:32:00.969809 | orchestrator | + osism manage compute list testbed-node-3 2026-04-05 01:32:02.605656 | orchestrator | 2026-04-05 01:32:02 | ERROR  | Unable to get ansible vault password 2026-04-05 01:32:02.605762 | orchestrator | 2026-04-05 01:32:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:32:02.605778 | orchestrator | 2026-04-05 01:32:02 | ERROR  | Dropping encrypted entries 2026-04-05 01:32:04.172842 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:32:04.172923 | orchestrator | | ID | Name | Status | 2026-04-05 01:32:04.172934 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:32:04.172967 | orchestrator | | e707db68-9716-42ed-ab71-c8afc05b7d2e | test-3 | ACTIVE | 2026-04-05 01:32:04.172976 | orchestrator | | 09032338-5e97-4fde-ad06-da67c25a0e39 | test-1 | ACTIVE | 2026-04-05 01:32:04.172985 | orchestrator | | 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd | test | ACTIVE | 2026-04-05 01:32:04.172993 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-05 01:32:04.512150 | orchestrator | + osism manage compute list testbed-node-4 2026-04-05 01:32:06.168695 | orchestrator | 2026-04-05 01:32:06 | ERROR  | Unable to get ansible vault password 2026-04-05 01:32:06.168799 | orchestrator | 2026-04-05 01:32:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:32:06.168816 | orchestrator | 2026-04-05 01:32:06 | ERROR  | Dropping encrypted entries 2026-04-05 01:32:07.362723 | orchestrator | +------+--------+----------+ 2026-04-05 01:32:07.362841 | orchestrator | | ID | Name | Status | 2026-04-05 01:32:07.362865 | orchestrator | |------+--------+----------| 2026-04-05 01:32:07.362875 | orchestrator | +------+--------+----------+ 2026-04-05 01:32:07.753474 | orchestrator | + osism manage compute list testbed-node-5 2026-04-05 01:32:09.393588 | orchestrator | 2026-04-05 01:32:09 | ERROR  | Unable to get ansible vault password 2026-04-05 01:32:09.393670 | orchestrator | 2026-04-05 01:32:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:32:09.393681 | orchestrator | 2026-04-05 01:32:09 | ERROR  | Dropping encrypted entries 2026-04-05 01:32:10.977928 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:32:10.978117 | orchestrator | | ID | Name | Status | 2026-04-05 01:32:10.978136 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:32:10.978148 | orchestrator | | df6a7122-009a-4906-ab7b-ffcfc5c9cde5 | test-4 | ACTIVE | 2026-04-05 01:32:10.978160 | orchestrator | | 42003049-8258-4648-9ccf-4b69d906ac3b | test-2 | ACTIVE | 2026-04-05 01:32:10.978172 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:32:11.337439 | orchestrator | + server_ping 2026-04-05 01:32:11.338801 | orchestrator | ++ tr -d 
'\r' 2026-04-05 01:32:11.338929 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-05 01:32:14.220180 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:32:14.220279 | orchestrator | + ping -c3 192.168.112.102 2026-04-05 01:32:14.229661 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 2026-04-05 01:32:14.229706 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=8.32 ms 2026-04-05 01:32:15.225422 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.20 ms 2026-04-05 01:32:16.226920 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.52 ms 2026-04-05 01:32:16.227008 | orchestrator | 2026-04-05 01:32:16.227022 | orchestrator | --- 192.168.112.102 ping statistics --- 2026-04-05 01:32:16.227033 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:32:16.227042 | orchestrator | rtt min/avg/max/mdev = 1.519/4.015/8.323/3.058 ms 2026-04-05 01:32:16.227052 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:32:16.227062 | orchestrator | + ping -c3 192.168.112.116 2026-04-05 01:32:16.238931 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-04-05 01:32:16.239035 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=6.90 ms 2026-04-05 01:32:17.235583 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.34 ms 2026-04-05 01:32:18.237128 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.79 ms 2026-04-05 01:32:18.237618 | orchestrator | 2026-04-05 01:32:18.237666 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-05 01:32:18.237689 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:32:18.237709 | orchestrator | rtt min/avg/max/mdev = 1.790/3.674/6.896/2.289 ms 2026-04-05 01:32:18.237783 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:32:18.237806 | orchestrator | + ping -c3 192.168.112.111 2026-04-05 01:32:18.250189 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data. 2026-04-05 01:32:18.250261 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=7.56 ms 2026-04-05 01:32:19.248992 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=4.30 ms 2026-04-05 01:32:20.247894 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=1.46 ms 2026-04-05 01:32:20.247998 | orchestrator | 2026-04-05 01:32:20.248015 | orchestrator | --- 192.168.112.111 ping statistics --- 2026-04-05 01:32:20.248028 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-05 01:32:20.248040 | orchestrator | rtt min/avg/max/mdev = 1.462/4.442/7.561/2.491 ms 2026-04-05 01:32:20.249593 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:32:20.249625 | orchestrator | + ping -c3 192.168.112.128 2026-04-05 01:32:20.261533 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data. 
2026-04-05 01:32:20.261624 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=7.69 ms 2026-04-05 01:32:21.259216 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=3.11 ms 2026-04-05 01:32:22.259155 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.70 ms 2026-04-05 01:32:22.259263 | orchestrator | 2026-04-05 01:32:22.259279 | orchestrator | --- 192.168.112.128 ping statistics --- 2026-04-05 01:32:22.259322 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:32:22.259334 | orchestrator | rtt min/avg/max/mdev = 1.702/4.169/7.692/2.556 ms 2026-04-05 01:32:22.259594 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:32:22.259619 | orchestrator | + ping -c3 192.168.112.193 2026-04-05 01:32:22.273198 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 2026-04-05 01:32:22.273334 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=8.25 ms 2026-04-05 01:32:23.269027 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.36 ms 2026-04-05 01:32:24.270998 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.99 ms 2026-04-05 01:32:24.271077 | orchestrator | 2026-04-05 01:32:24.271087 | orchestrator | --- 192.168.112.193 ping statistics --- 2026-04-05 01:32:24.271096 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:32:24.271103 | orchestrator | rtt min/avg/max/mdev = 1.986/4.197/8.249/2.868 ms 2026-04-05 01:32:24.271746 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-05 01:32:25.926611 | orchestrator | 2026-04-05 01:32:25 | ERROR  | Unable to get ansible vault password 2026-04-05 01:32:25.926727 | orchestrator | 2026-04-05 01:32:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-05 01:32:25.926749 | orchestrator | 2026-04-05 01:32:25 | ERROR  | Dropping encrypted entries 2026-04-05 01:32:27.936926 | orchestrator | 2026-04-05 01:32:27 | INFO  | Live migrating server df6a7122-009a-4906-ab7b-ffcfc5c9cde5 2026-04-05 01:32:40.363908 | orchestrator | 2026-04-05 01:32:40 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:42.725355 | orchestrator | 2026-04-05 01:32:42 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:45.091573 | orchestrator | 2026-04-05 01:32:45 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:47.436895 | orchestrator | 2026-04-05 01:32:47 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:49.862540 | orchestrator | 2026-04-05 01:32:49 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:52.264905 | orchestrator | 2026-04-05 01:32:52 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:54.829524 | orchestrator | 2026-04-05 01:32:54 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:57.177836 | orchestrator | 2026-04-05 01:32:57 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:32:59.692428 | orchestrator | 2026-04-05 01:32:59 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress 2026-04-05 01:33:02.036738 | orchestrator | 2026-04-05 01:33:02 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) completed with status ACTIVE 2026-04-05 01:33:02.036840 | orchestrator | 2026-04-05 01:33:02 | INFO  | Live migrating server 
42003049-8258-4648-9ccf-4b69d906ac3b 2026-04-05 01:33:13.273549 | orchestrator | 2026-04-05 01:33:13 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:15.643165 | orchestrator | 2026-04-05 01:33:15 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:18.009932 | orchestrator | 2026-04-05 01:33:18 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:20.430685 | orchestrator | 2026-04-05 01:33:20 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:22.743458 | orchestrator | 2026-04-05 01:33:22 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:25.134480 | orchestrator | 2026-04-05 01:33:25 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:27.423302 | orchestrator | 2026-04-05 01:33:27 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:29.722593 | orchestrator | 2026-04-05 01:33:29 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress 2026-04-05 01:33:32.063875 | orchestrator | 2026-04-05 01:33:32 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) completed with status ACTIVE 2026-04-05 01:33:32.413712 | orchestrator | + compute_list 2026-04-05 01:33:32.413808 | orchestrator | + osism manage compute list testbed-node-3 2026-04-05 01:33:34.107242 | orchestrator | 2026-04-05 01:33:34 | ERROR  | Unable to get ansible vault password 2026-04-05 01:33:34.107381 | orchestrator | 2026-04-05 01:33:34 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:33:34.107401 | orchestrator | 2026-04-05 01:33:34 | ERROR  | Dropping 
encrypted entries 2026-04-05 01:33:35.798407 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:33:35.798503 | orchestrator | | ID | Name | Status | 2026-04-05 01:33:35.798516 | orchestrator | |--------------------------------------+--------+----------| 2026-04-05 01:33:35.798526 | orchestrator | | e707db68-9716-42ed-ab71-c8afc05b7d2e | test-3 | ACTIVE | 2026-04-05 01:33:35.798536 | orchestrator | | df6a7122-009a-4906-ab7b-ffcfc5c9cde5 | test-4 | ACTIVE | 2026-04-05 01:33:35.798547 | orchestrator | | 09032338-5e97-4fde-ad06-da67c25a0e39 | test-1 | ACTIVE | 2026-04-05 01:33:35.798557 | orchestrator | | 42003049-8258-4648-9ccf-4b69d906ac3b | test-2 | ACTIVE | 2026-04-05 01:33:35.798567 | orchestrator | | 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd | test | ACTIVE | 2026-04-05 01:33:35.798577 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-05 01:33:36.219915 | orchestrator | + osism manage compute list testbed-node-4 2026-04-05 01:33:37.836610 | orchestrator | 2026-04-05 01:33:37 | ERROR  | Unable to get ansible vault password 2026-04-05 01:33:37.836707 | orchestrator | 2026-04-05 01:33:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:33:37.836715 | orchestrator | 2026-04-05 01:33:37 | ERROR  | Dropping encrypted entries 2026-04-05 01:33:39.016964 | orchestrator | +------+--------+----------+ 2026-04-05 01:33:39.017056 | orchestrator | | ID | Name | Status | 2026-04-05 01:33:39.017068 | orchestrator | |------+--------+----------| 2026-04-05 01:33:39.017078 | orchestrator | +------+--------+----------+ 2026-04-05 01:33:39.371195 | orchestrator | + osism manage compute list testbed-node-5 2026-04-05 01:33:41.127237 | orchestrator | 2026-04-05 01:33:41 | ERROR  | Unable to get ansible vault password 2026-04-05 01:33:41.127391 | orchestrator | 2026-04-05 01:33:41 | ERROR  | Unable to get vault secret: [Errno 2] No such 
file or directory: '/share/ansible_vault_password.key' 2026-04-05 01:33:41.127412 | orchestrator | 2026-04-05 01:33:41 | ERROR  | Dropping encrypted entries 2026-04-05 01:33:42.291615 | orchestrator | +------+--------+----------+ 2026-04-05 01:33:42.291695 | orchestrator | | ID | Name | Status | 2026-04-05 01:33:42.291704 | orchestrator | |------+--------+----------| 2026-04-05 01:33:42.291711 | orchestrator | +------+--------+----------+ 2026-04-05 01:33:42.638507 | orchestrator | + server_ping 2026-04-05 01:33:42.638755 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-05 01:33:42.638778 | orchestrator | ++ tr -d '\r' 2026-04-05 01:33:45.551880 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:33:45.552250 | orchestrator | + ping -c3 192.168.112.102 2026-04-05 01:33:45.564780 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data. 
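[editor's note] The repeated `+ compute_list` / `+ osism manage compute list testbed-node-N` pattern above reduces to a small helper that walks the three compute nodes in turn:

```shell
# Reconstruction of the compute_list helper traced above: print the
# instance table for each compute node so the before/after placement of
# the live-migrated servers can be compared in the log.
compute_list() {
    for node in testbed-node-3 testbed-node-4 testbed-node-5; do
        osism manage compute list "$node"
    done
}
```

After each `osism manage compute migrate` round the helper is rerun; the empty tables for testbed-node-4 and testbed-node-5 above confirm the source nodes were fully drained.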
2026-04-05 01:33:45.564869 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=10.2 ms 2026-04-05 01:33:46.558222 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=1.98 ms 2026-04-05 01:33:47.559585 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.73 ms 2026-04-05 01:33:47.559684 | orchestrator | 2026-04-05 01:33:47.559697 | orchestrator | --- 192.168.112.102 ping statistics --- 2026-04-05 01:33:47.559707 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:33:47.559716 | orchestrator | rtt min/avg/max/mdev = 1.731/4.638/10.204/3.937 ms 2026-04-05 01:33:47.559726 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:33:47.559734 | orchestrator | + ping -c3 192.168.112.116 2026-04-05 01:33:47.571979 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 2026-04-05 01:33:47.572074 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.77 ms 2026-04-05 01:33:48.568355 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=1.81 ms 2026-04-05 01:33:49.569639 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.68 ms 2026-04-05 01:33:49.569777 | orchestrator | 2026-04-05 01:33:49.569794 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-04-05 01:33:49.569809 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:33:49.569821 | orchestrator | rtt min/avg/max/mdev = 1.680/3.752/7.771/2.841 ms 2026-04-05 01:33:49.570662 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:33:49.570701 | orchestrator | + ping -c3 192.168.112.111 2026-04-05 01:33:49.585071 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data. 
2026-04-05 01:33:49.585175 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=9.47 ms 2026-04-05 01:33:50.579900 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.26 ms 2026-04-05 01:33:51.581058 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=2.20 ms 2026-04-05 01:33:51.581139 | orchestrator | 2026-04-05 01:33:51.581147 | orchestrator | --- 192.168.112.111 ping statistics --- 2026-04-05 01:33:51.581155 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:33:51.581182 | orchestrator | rtt min/avg/max/mdev = 2.195/4.644/9.474/3.415 ms 2026-04-05 01:33:51.581720 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:33:51.581822 | orchestrator | + ping -c3 192.168.112.128 2026-04-05 01:33:51.595254 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data. 2026-04-05 01:33:51.595366 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=8.17 ms 2026-04-05 01:33:52.591206 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.54 ms 2026-04-05 01:33:53.593543 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=2.28 ms 2026-04-05 01:33:53.593631 | orchestrator | 2026-04-05 01:33:53.593641 | orchestrator | --- 192.168.112.128 ping statistics --- 2026-04-05 01:33:53.593649 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-05 01:33:53.593655 | orchestrator | rtt min/avg/max/mdev = 2.283/4.329/8.166/2.714 ms 2026-04-05 01:33:53.593662 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-05 01:33:53.593669 | orchestrator | + ping -c3 192.168.112.193 2026-04-05 01:33:53.607062 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 
2026-04-05 01:33:53.607155 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=8.31 ms
2026-04-05 01:33:54.603233 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.74 ms
2026-04-05 01:33:55.604903 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=2.22 ms
2026-04-05 01:33:55.605005 | orchestrator | 
2026-04-05 01:33:55.605018 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-04-05 01:33:55.605029 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 01:33:55.605037 | orchestrator | rtt min/avg/max/mdev = 2.221/4.421/8.307/2.755 ms
2026-04-05 01:33:55.605594 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-04-05 01:33:57.254496 | orchestrator | 2026-04-05 01:33:57 | ERROR  | Unable to get ansible vault password
2026-04-05 01:33:57.254645 | orchestrator | 2026-04-05 01:33:57 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:33:57.254667 | orchestrator | 2026-04-05 01:33:57 | ERROR  | Dropping encrypted entries
2026-04-05 01:33:58.888657 | orchestrator | 2026-04-05 01:33:58 | INFO  | Live migrating server e707db68-9716-42ed-ab71-c8afc05b7d2e
2026-04-05 01:34:11.771166 | orchestrator | 2026-04-05 01:34:11 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:14.141426 | orchestrator | 2026-04-05 01:34:14 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:16.506312 | orchestrator | 2026-04-05 01:34:16 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:18.908422 | orchestrator | 2026-04-05 01:34:18 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:21.281119 | orchestrator | 2026-04-05 01:34:21 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:23.591777 | orchestrator | 2026-04-05 01:34:23 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:25.870295 | orchestrator | 2026-04-05 01:34:25 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:28.198497 | orchestrator | 2026-04-05 01:34:28 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:34:30.565541 | orchestrator | 2026-04-05 01:34:30 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) completed with status ACTIVE
2026-04-05 01:34:30.565619 | orchestrator | 2026-04-05 01:34:30 | INFO  | Live migrating server df6a7122-009a-4906-ab7b-ffcfc5c9cde5
2026-04-05 01:34:41.648674 | orchestrator | 2026-04-05 01:34:41 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:34:44.012275 | orchestrator | 2026-04-05 01:34:44 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:34:46.402993 | orchestrator | 2026-04-05 01:34:46 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:34:48.703013 | orchestrator | 2026-04-05 01:34:48 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:34:51.067106 | orchestrator | 2026-04-05 01:34:51 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:34:53.329000 | orchestrator | 2026-04-05 01:34:53 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:34:55.626883 | orchestrator | 2026-04-05 01:34:55 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:34:57.899709 | orchestrator | 2026-04-05 01:34:57 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:35:00.258124 | orchestrator | 2026-04-05 01:35:00 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) completed with status ACTIVE
2026-04-05 01:35:00.258210 | orchestrator | 2026-04-05 01:35:00 | INFO  | Live migrating server 09032338-5e97-4fde-ad06-da67c25a0e39
2026-04-05 01:35:10.658361 | orchestrator | 2026-04-05 01:35:10 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:13.078465 | orchestrator | 2026-04-05 01:35:13 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:15.396005 | orchestrator | 2026-04-05 01:35:15 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:17.674001 | orchestrator | 2026-04-05 01:35:17 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:19.987054 | orchestrator | 2026-04-05 01:35:19 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:22.313305 | orchestrator | 2026-04-05 01:35:22 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:24.652976 | orchestrator | 2026-04-05 01:35:24 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:26.960261 | orchestrator | 2026-04-05 01:35:26 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:35:29.256933 | orchestrator | 2026-04-05 01:35:29 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) completed with status ACTIVE
2026-04-05 01:35:29.257026 | orchestrator | 2026-04-05 01:35:29 | INFO  | Live migrating server 42003049-8258-4648-9ccf-4b69d906ac3b
2026-04-05 01:35:40.987245 | orchestrator | 2026-04-05 01:35:40 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:43.356849 | orchestrator | 2026-04-05 01:35:43 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:45.771953 | orchestrator | 2026-04-05 01:35:45 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:48.169230 | orchestrator | 2026-04-05 01:35:48 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:50.557339 | orchestrator | 2026-04-05 01:35:50 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:52.842471 | orchestrator | 2026-04-05 01:35:52 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:55.191031 | orchestrator | 2026-04-05 01:35:55 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:57.457998 | orchestrator | 2026-04-05 01:35:57 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:35:59.850871 | orchestrator | 2026-04-05 01:35:59 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:36:02.265635 | orchestrator | 2026-04-05 01:36:02 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) completed with status ACTIVE
2026-04-05 01:36:02.265711 | orchestrator | 2026-04-05 01:36:02 | INFO  | Live migrating server 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd
2026-04-05 01:36:12.983472 | orchestrator | 2026-04-05 01:36:12 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:15.339367 | orchestrator | 2026-04-05 01:36:15 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:17.736658 | orchestrator | 2026-04-05 01:36:17 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:20.171099 | orchestrator | 2026-04-05 01:36:20 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:22.478505 | orchestrator | 2026-04-05 01:36:22 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:24.868707 | orchestrator | 2026-04-05 01:36:24 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:27.164934 | orchestrator | 2026-04-05 01:36:27 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:29.476224 | orchestrator | 2026-04-05 01:36:29 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:31.809948 | orchestrator | 2026-04-05 01:36:31 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:36:34.143561 | orchestrator | 2026-04-05 01:36:34 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) completed with status ACTIVE
2026-04-05 01:36:34.493634 | orchestrator | + compute_list
2026-04-05 01:36:34.493733 | orchestrator | + osism manage compute list testbed-node-3
2026-04-05 01:36:36.094657 | orchestrator | 2026-04-05 01:36:36 | ERROR  | Unable to get ansible vault password
2026-04-05 01:36:36.094754 | orchestrator | 2026-04-05 01:36:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:36:36.094772 | orchestrator | 2026-04-05 01:36:36 | ERROR  | Dropping encrypted entries
2026-04-05 01:36:37.379983 | orchestrator | +------+--------+----------+
2026-04-05 01:36:37.380099 | orchestrator | | ID | Name | Status |
2026-04-05 01:36:37.380123 | orchestrator | |------+--------+----------|
2026-04-05 01:36:37.380143 | orchestrator | +------+--------+----------+
2026-04-05 01:36:37.722702 | orchestrator | + osism manage compute list testbed-node-4
2026-04-05 01:36:39.337028 | orchestrator | 2026-04-05 01:36:39 | ERROR  | Unable to get ansible vault password
2026-04-05 01:36:39.337157 | orchestrator | 2026-04-05 01:36:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:36:39.337220 | orchestrator | 2026-04-05 01:36:39 | ERROR  | Dropping encrypted entries
2026-04-05 01:36:40.940100 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:36:40.940205 | orchestrator | | ID | Name | Status |
2026-04-05 01:36:40.940226 | orchestrator | |--------------------------------------+--------+----------|
2026-04-05 01:36:40.940244 | orchestrator | | e707db68-9716-42ed-ab71-c8afc05b7d2e | test-3 | ACTIVE |
2026-04-05 01:36:40.940263 | orchestrator | | df6a7122-009a-4906-ab7b-ffcfc5c9cde5 | test-4 | ACTIVE |
2026-04-05 01:36:40.940283 | orchestrator | | 09032338-5e97-4fde-ad06-da67c25a0e39 | test-1 | ACTIVE |
2026-04-05 01:36:40.940302 | orchestrator | | 42003049-8258-4648-9ccf-4b69d906ac3b | test-2 | ACTIVE |
2026-04-05 01:36:40.940320 | orchestrator | | 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd | test | ACTIVE |
2026-04-05 01:36:40.940337 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:36:41.311976 | orchestrator | + osism manage compute list testbed-node-5
2026-04-05 01:36:42.936778 | orchestrator | 2026-04-05 01:36:42 | ERROR  | Unable to get ansible vault password
2026-04-05 01:36:42.936913 | orchestrator | 2026-04-05 01:36:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:36:42.936932 | orchestrator | 2026-04-05 01:36:42 | ERROR  | Dropping encrypted entries
2026-04-05 01:36:44.079219 | orchestrator | +------+--------+----------+
2026-04-05 01:36:44.079368 | orchestrator | | ID | Name | Status |
2026-04-05 01:36:44.079392 | orchestrator | |------+--------+----------|
2026-04-05 01:36:44.079410 | orchestrator | +------+--------+----------+
2026-04-05 01:36:44.425849 | orchestrator | + server_ping
2026-04-05 01:36:44.426641 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-05 01:36:44.427678 | orchestrator | ++ tr -d '\r'
2026-04-05 01:36:47.645099 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:36:47.645255 | orchestrator | + ping -c3 192.168.112.102
2026-04-05 01:36:47.655556 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data.
2026-04-05 01:36:47.655627 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=6.37 ms
2026-04-05 01:36:48.653893 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.49 ms
2026-04-05 01:36:49.655809 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.64 ms
2026-04-05 01:36:49.655929 | orchestrator | 
2026-04-05 01:36:49.655949 | orchestrator | --- 192.168.112.102 ping statistics ---
2026-04-05 01:36:49.655963 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 01:36:49.655975 | orchestrator | rtt min/avg/max/mdev = 1.643/3.500/6.373/2.060 ms
2026-04-05 01:36:49.655987 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:36:49.655999 | orchestrator | + ping -c3 192.168.112.116
2026-04-05 01:36:49.667971 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-04-05 01:36:49.668078 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.10 ms
2026-04-05 01:36:50.663658 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.10 ms
2026-04-05 01:36:51.664991 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.92 ms
2026-04-05 01:36:51.665144 | orchestrator | 
2026-04-05 01:36:51.665172 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-04-05 01:36:51.665193 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 01:36:51.665213 | orchestrator | rtt min/avg/max/mdev = 1.920/3.705/7.100/2.401 ms
2026-04-05 01:36:51.665776 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:36:51.665818 | orchestrator | + ping -c3 192.168.112.111
2026-04-05 01:36:51.676987 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data.
2026-04-05 01:36:51.677056 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=6.62 ms
2026-04-05 01:36:52.674976 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.19 ms
2026-04-05 01:36:53.676533 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=1.64 ms
2026-04-05 01:36:53.676632 | orchestrator | 
2026-04-05 01:36:53.676654 | orchestrator | --- 192.168.112.111 ping statistics ---
2026-04-05 01:36:53.676672 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 01:36:53.676688 | orchestrator | rtt min/avg/max/mdev = 1.642/3.484/6.618/2.227 ms
2026-04-05 01:36:53.676705 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:36:53.676722 | orchestrator | + ping -c3 192.168.112.128
2026-04-05 01:36:53.687513 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data.
2026-04-05 01:36:53.687631 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=6.99 ms
2026-04-05 01:36:54.684304 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.43 ms
2026-04-05 01:36:55.685006 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.64 ms
2026-04-05 01:36:55.685104 | orchestrator | 
2026-04-05 01:36:55.685120 | orchestrator | --- 192.168.112.128 ping statistics ---
2026-04-05 01:36:55.685133 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 01:36:55.685145 | orchestrator | rtt min/avg/max/mdev = 1.641/3.686/6.990/2.358 ms
2026-04-05 01:36:55.685206 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:36:55.685385 | orchestrator | + ping -c3 192.168.112.193
2026-04-05 01:36:55.695398 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-04-05 01:36:55.695502 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=5.75 ms
2026-04-05 01:36:56.694507 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.58 ms
2026-04-05 01:36:57.695911 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=2.08 ms
2026-04-05 01:36:57.696037 | orchestrator | 
2026-04-05 01:36:57.696066 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-04-05 01:36:57.696087 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 01:36:57.696104 | orchestrator | rtt min/avg/max/mdev = 2.079/3.467/5.745/1.623 ms
2026-04-05 01:36:57.697040 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-04-05 01:36:59.367488 | orchestrator | 2026-04-05 01:36:59 | ERROR  | Unable to get ansible vault password
2026-04-05 01:36:59.367614 | orchestrator | 2026-04-05 01:36:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:36:59.367632 | orchestrator | 2026-04-05 01:36:59 | ERROR  | Dropping encrypted entries
2026-04-05 01:37:01.019867 | orchestrator | 2026-04-05 01:37:01 | INFO  | Live migrating server e707db68-9716-42ed-ab71-c8afc05b7d2e
2026-04-05 01:37:10.676391 | orchestrator | 2026-04-05 01:37:10 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:13.066366 | orchestrator | 2026-04-05 01:37:13 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:15.407148 | orchestrator | 2026-04-05 01:37:15 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:17.711874 | orchestrator | 2026-04-05 01:37:17 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:20.016158 | orchestrator | 2026-04-05 01:37:20 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:22.327678 | orchestrator | 2026-04-05 01:37:22 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:24.640351 | orchestrator | 2026-04-05 01:37:24 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:26.960117 | orchestrator | 2026-04-05 01:37:26 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) is still in progress
2026-04-05 01:37:29.229746 | orchestrator | 2026-04-05 01:37:29 | INFO  | Live migration of e707db68-9716-42ed-ab71-c8afc05b7d2e (test-3) completed with status ACTIVE
2026-04-05 01:37:29.229826 | orchestrator | 2026-04-05 01:37:29 | INFO  | Live migrating server df6a7122-009a-4906-ab7b-ffcfc5c9cde5
2026-04-05 01:37:40.458420 | orchestrator | 2026-04-05 01:37:40 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:42.813186 | orchestrator | 2026-04-05 01:37:42 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:45.206267 | orchestrator | 2026-04-05 01:37:45 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:47.472140 | orchestrator | 2026-04-05 01:37:47 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:49.729811 | orchestrator | 2026-04-05 01:37:49 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:52.239293 | orchestrator | 2026-04-05 01:37:52 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:54.517628 | orchestrator | 2026-04-05 01:37:54 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:56.825696 | orchestrator | 2026-04-05 01:37:56 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) is still in progress
2026-04-05 01:37:59.169550 | orchestrator | 2026-04-05 01:37:59 | INFO  | Live migration of df6a7122-009a-4906-ab7b-ffcfc5c9cde5 (test-4) completed with status ACTIVE
2026-04-05 01:37:59.169641 | orchestrator | 2026-04-05 01:37:59 | INFO  | Live migrating server 09032338-5e97-4fde-ad06-da67c25a0e39
2026-04-05 01:38:09.084915 | orchestrator | 2026-04-05 01:38:09 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:11.473325 | orchestrator | 2026-04-05 01:38:11 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:13.942276 | orchestrator | 2026-04-05 01:38:13 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:16.336001 | orchestrator | 2026-04-05 01:38:16 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:18.651247 | orchestrator | 2026-04-05 01:38:18 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:20.932773 | orchestrator | 2026-04-05 01:38:20 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:23.246714 | orchestrator | 2026-04-05 01:38:23 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:25.613628 | orchestrator | 2026-04-05 01:38:25 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) is still in progress
2026-04-05 01:38:27.891560 | orchestrator | 2026-04-05 01:38:27 | INFO  | Live migration of 09032338-5e97-4fde-ad06-da67c25a0e39 (test-1) completed with status ACTIVE
2026-04-05 01:38:27.891678 | orchestrator | 2026-04-05 01:38:27 | INFO  | Live migrating server 42003049-8258-4648-9ccf-4b69d906ac3b
2026-04-05 01:38:38.294777 | orchestrator | 2026-04-05 01:38:38 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:40.649475 | orchestrator | 2026-04-05 01:38:40 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:43.005014 | orchestrator | 2026-04-05 01:38:43 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:45.310597 | orchestrator | 2026-04-05 01:38:45 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:47.582324 | orchestrator | 2026-04-05 01:38:47 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:49.889499 | orchestrator | 2026-04-05 01:38:49 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:52.182118 | orchestrator | 2026-04-05 01:38:52 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:54.498539 | orchestrator | 2026-04-05 01:38:54 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) is still in progress
2026-04-05 01:38:56.870655 | orchestrator | 2026-04-05 01:38:56 | INFO  | Live migration of 42003049-8258-4648-9ccf-4b69d906ac3b (test-2) completed with status ACTIVE
2026-04-05 01:38:56.870779 | orchestrator | 2026-04-05 01:38:56 | INFO  | Live migrating server 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd
2026-04-05 01:39:06.971645 | orchestrator | 2026-04-05 01:39:06 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:09.363468 | orchestrator | 2026-04-05 01:39:09 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:11.887422 | orchestrator | 2026-04-05 01:39:11 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:14.185844 | orchestrator | 2026-04-05 01:39:14 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:16.502380 | orchestrator | 2026-04-05 01:39:16 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:18.813981 | orchestrator | 2026-04-05 01:39:18 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:21.309360 | orchestrator | 2026-04-05 01:39:21 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:23.695879 | orchestrator | 2026-04-05 01:39:23 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:26.015065 | orchestrator | 2026-04-05 01:39:26 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:28.381435 | orchestrator | 2026-04-05 01:39:28 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) is still in progress
2026-04-05 01:39:30.754689 | orchestrator | 2026-04-05 01:39:30 | INFO  | Live migration of 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd (test) completed with status ACTIVE
2026-04-05 01:39:31.104361 | orchestrator | + compute_list
2026-04-05 01:39:31.104469 | orchestrator | + osism manage compute list testbed-node-3
2026-04-05 01:39:32.761151 | orchestrator | 2026-04-05 01:39:32 | ERROR  | Unable to get ansible vault password
2026-04-05 01:39:32.761218 | orchestrator | 2026-04-05 01:39:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:39:32.761232 | orchestrator | 2026-04-05 01:39:32 | ERROR  | Dropping encrypted entries
2026-04-05 01:39:33.909935 | orchestrator | +------+--------+----------+
2026-04-05 01:39:33.910115 | orchestrator | | ID | Name | Status |
2026-04-05 01:39:33.910136 | orchestrator | |------+--------+----------|
2026-04-05 01:39:33.910148 | orchestrator | +------+--------+----------+
2026-04-05 01:39:34.376687 | orchestrator | + osism manage compute list testbed-node-4
2026-04-05 01:39:36.059536 | orchestrator | 2026-04-05 01:39:36 | ERROR  | Unable to get ansible vault password
2026-04-05 01:39:36.059642 | orchestrator | 2026-04-05 01:39:36 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:39:36.059660 | orchestrator | 2026-04-05 01:39:36 | ERROR  | Dropping encrypted entries
2026-04-05 01:39:37.358270 | orchestrator | +------+--------+----------+
2026-04-05 01:39:37.358451 | orchestrator | | ID | Name | Status |
2026-04-05 01:39:37.358471 | orchestrator | |------+--------+----------|
2026-04-05 01:39:37.358484 | orchestrator | +------+--------+----------+
2026-04-05 01:39:37.733735 | orchestrator | + osism manage compute list testbed-node-5
2026-04-05 01:39:39.405925 | orchestrator | 2026-04-05 01:39:39 | ERROR  | Unable to get ansible vault password
2026-04-05 01:39:39.406098 | orchestrator | 2026-04-05 01:39:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 01:39:39.406119 | orchestrator | 2026-04-05 01:39:39 | ERROR  | Dropping encrypted entries
2026-04-05 01:39:41.131539 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:39:41.132248 | orchestrator | | ID | Name | Status |
2026-04-05 01:39:41.132307 | orchestrator | |--------------------------------------+--------+----------|
2026-04-05 01:39:41.132319 | orchestrator | | e707db68-9716-42ed-ab71-c8afc05b7d2e | test-3 | ACTIVE |
2026-04-05 01:39:41.132329 | orchestrator | | df6a7122-009a-4906-ab7b-ffcfc5c9cde5 | test-4 | ACTIVE |
2026-04-05 01:39:41.132339 | orchestrator | | 09032338-5e97-4fde-ad06-da67c25a0e39 | test-1 | ACTIVE |
2026-04-05 01:39:41.132349 | orchestrator | | 42003049-8258-4648-9ccf-4b69d906ac3b | test-2 | ACTIVE |
2026-04-05 01:39:41.132359 | orchestrator | | 5d0b6991-2fb9-4f5e-9c24-c349d2c523cd | test | ACTIVE |
2026-04-05 01:39:41.132368 | orchestrator | +--------------------------------------+--------+----------+
2026-04-05 01:39:41.457435 | orchestrator | + server_ping
2026-04-05 01:39:41.458374 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-05 01:39:41.458624 | orchestrator | ++ tr -d '\r'
2026-04-05 01:39:44.395912 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:39:44.396038 | orchestrator | + ping -c3 192.168.112.102
2026-04-05 01:39:44.412646 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data.
2026-04-05 01:39:44.412713 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=11.2 ms
2026-04-05 01:39:45.405745 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.45 ms
2026-04-05 01:39:46.406953 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=1.63 ms
2026-04-05 01:39:46.407910 | orchestrator | 
2026-04-05 01:39:46.407969 | orchestrator | --- 192.168.112.102 ping statistics ---
2026-04-05 01:39:46.407980 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:39:46.407990 | orchestrator | rtt min/avg/max/mdev = 1.628/5.090/11.196/4.330 ms
2026-04-05 01:39:46.407999 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:39:46.408008 | orchestrator | + ping -c3 192.168.112.116
2026-04-05 01:39:46.416545 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data.
2026-04-05 01:39:46.416621 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=5.67 ms
2026-04-05 01:39:47.415473 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.31 ms
2026-04-05 01:39:48.416350 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.73 ms
2026-04-05 01:39:48.416448 | orchestrator | 
2026-04-05 01:39:48.416464 | orchestrator | --- 192.168.112.116 ping statistics ---
2026-04-05 01:39:48.416477 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:39:48.416489 | orchestrator | rtt min/avg/max/mdev = 1.729/3.236/5.672/1.738 ms
2026-04-05 01:39:48.416501 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:39:48.416543 | orchestrator | + ping -c3 192.168.112.111
2026-04-05 01:39:48.430822 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data.
2026-04-05 01:39:48.430909 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=9.11 ms
2026-04-05 01:39:49.425757 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.65 ms
2026-04-05 01:39:50.427437 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=2.08 ms
2026-04-05 01:39:50.427620 | orchestrator | 
2026-04-05 01:39:50.427642 | orchestrator | --- 192.168.112.111 ping statistics ---
2026-04-05 01:39:50.427656 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 01:39:50.427667 | orchestrator | rtt min/avg/max/mdev = 2.076/4.611/9.109/3.188 ms
2026-04-05 01:39:50.427809 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:39:50.427835 | orchestrator | + ping -c3 192.168.112.128
2026-04-05 01:39:50.439307 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data.
2026-04-05 01:39:50.439400 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=5.87 ms
2026-04-05 01:39:51.437072 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.31 ms
2026-04-05 01:39:52.438108 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.54 ms
2026-04-05 01:39:52.438301 | orchestrator | 
2026-04-05 01:39:52.438321 | orchestrator | --- 192.168.112.128 ping statistics ---
2026-04-05 01:39:52.438335 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:39:52.438347 | orchestrator | rtt min/avg/max/mdev = 1.538/3.239/5.874/1.889 ms
2026-04-05 01:39:52.438459 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 01:39:52.438477 | orchestrator | + ping -c3 192.168.112.193
2026-04-05 01:39:52.449380 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-04-05 01:39:52.449411 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=5.12 ms
2026-04-05 01:39:53.448902 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.82 ms
2026-04-05 01:39:54.449119 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.71 ms
2026-04-05 01:39:54.449228 | orchestrator | 
2026-04-05 01:39:54.449245 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-04-05 01:39:54.449343 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 01:39:54.449362 | orchestrator | rtt min/avg/max/mdev = 1.714/3.219/5.122/1.419 ms
2026-04-05 01:39:54.819255 | orchestrator | ok: Runtime: 0:19:43.257939
2026-04-05 01:39:54.880115 | 
2026-04-05 01:39:54.880364 | TASK [Run tempest]
2026-04-05 01:39:55.679350 | orchestrator | 
2026-04-05 01:39:55.679547 | orchestrator | # Tempest
2026-04-05 01:39:55.679580 | orchestrator | 
2026-04-05 01:39:55.679600 | orchestrator | + set -e
2026-04-05 01:39:55.679625 | orchestrator | + source /opt/manager-vars.sh
2026-04-05 01:39:55.679649 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-05 01:39:55.679676 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-05 01:39:55.679730 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-05 01:39:55.679760 | orchestrator | ++ CEPH_VERSION=reef
2026-04-05 01:39:55.679780 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-05 01:39:55.679799 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-05 01:39:55.679828 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-05 01:39:55.679851 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-05 01:39:55.679869 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-05 01:39:55.679894 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-05 01:39:55.679909 | orchestrator | ++ export ARA=false
2026-04-05 01:39:55.679926 | orchestrator | ++ ARA=false
2026-04-05 01:39:55.679953 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-05 01:39:55.679969 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-05 01:39:55.679984 | orchestrator | ++ export TEMPEST=true
2026-04-05 01:39:55.680004 | orchestrator | ++ TEMPEST=true
2026-04-05 01:39:55.680020 | orchestrator | ++ export IS_ZUUL=true
2026-04-05 01:39:55.680036 | orchestrator | ++ IS_ZUUL=true
2026-04-05 01:39:55.680053 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2026-04-05 01:39:55.680071 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.221
2026-04-05 01:39:55.680087 | orchestrator | ++ export EXTERNAL_API=false
2026-04-05 01:39:55.680104 | orchestrator | ++ EXTERNAL_API=false
2026-04-05 01:39:55.680120 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-05 01:39:55.680136 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-05 01:39:55.680152 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-05 01:39:55.680169 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-05 01:39:55.680186 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-05 01:39:55.680201 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-05 01:39:55.680218 | orchestrator | + echo
2026-04-05 01:39:55.680234 | orchestrator | + echo '# Tempest'
2026-04-05 01:39:55.680307 | orchestrator | + echo
2026-04-05 01:39:55.680327 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-05 01:39:55.680343 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-05 01:40:07.190335 | orchestrator | 2026-04-05 01:40:07 | INFO  | Prepare task for execution of tempest.
2026-04-05 01:40:07.278572 | orchestrator | 2026-04-05 01:40:07 | INFO  | Task a07b30f2-63f3-4fbb-81f3-3da3731cdf4c (tempest) was prepared for execution.
2026-04-05 01:40:07.278673 | orchestrator | 2026-04-05 01:40:07 | INFO  | It takes a moment until task a07b30f2-63f3-4fbb-81f3-3da3731cdf4c (tempest) has been started and output is visible here.
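The migration output above follows a simple polling pattern: repeated "is still in progress" messages until the server reports ACTIVE. The loop can be sketched in shell as below. This is a hypothetical illustration, not the actual osism implementation; `get_status` is a stub standing in for a real status query such as `openstack server show -f value -c status <server>`.

```shell
#!/bin/sh
# Sketch of a migration polling loop (assumption: the real tool polls a
# status API; get_status is a stub that reports MIGRATING twice, then ACTIVE).
get_status() {
    if [ "$1" -lt 2 ]; then echo "MIGRATING"; else echo "ACTIVE"; fi
}

wait_for_active() {
    server="$1"
    poll=0
    while [ "$poll" -lt 10 ]; do
        status=$(get_status "$poll")
        if [ "$status" = "ACTIVE" ]; then
            echo "Live migration of $server completed with status $status"
            return 0
        fi
        echo "Live migration of $server is still in progress"
        poll=$((poll + 1))
        # A real loop would sleep between polls; omitted so the stub runs instantly.
    done
    return 1
}

wait_for_active e707db68-9716-42ed-ab71-c8afc05b7d2e
```

Passing the poll counter as an argument avoids the classic pitfall of incrementing a counter inside a command substitution, where the change would be lost in the subshell.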
2026-04-05 01:41:30.697287 | orchestrator | 2026-04-05 01:41:30.698207 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-04-05 01:41:30.698245 | orchestrator | 2026-04-05 01:41:30.698261 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-04-05 01:41:30.698291 | orchestrator | Sunday 05 April 2026 01:40:10 +0000 (0:00:00.326) 0:00:00.326 ********** 2026-04-05 01:41:30.698306 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.698323 | orchestrator | 2026-04-05 01:41:30.698337 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-04-05 01:41:30.698351 | orchestrator | Sunday 05 April 2026 01:40:11 +0000 (0:00:00.927) 0:00:01.254 ********** 2026-04-05 01:41:30.698367 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.698380 | orchestrator | 2026-04-05 01:41:30.698394 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-04-05 01:41:30.698406 | orchestrator | Sunday 05 April 2026 01:40:12 +0000 (0:00:01.280) 0:00:02.534 ********** 2026-04-05 01:41:30.698418 | orchestrator | ok: [testbed-manager] 2026-04-05 01:41:30.698429 | orchestrator | 2026-04-05 01:41:30.698441 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-04-05 01:41:30.698453 | orchestrator | Sunday 05 April 2026 01:40:13 +0000 (0:00:00.486) 0:00:03.021 ********** 2026-04-05 01:41:30.698464 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.698476 | orchestrator | 2026-04-05 01:41:30.698488 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-04-05 01:41:30.698500 | orchestrator | Sunday 05 April 2026 01:40:37 +0000 (0:00:23.775) 0:00:26.797 ********** 2026-04-05 01:41:30.698533 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-04-05 
01:41:30.698541 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-04-05 01:41:30.698551 | orchestrator | 2026-04-05 01:41:30.698558 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-04-05 01:41:30.698564 | orchestrator | Sunday 05 April 2026 01:40:46 +0000 (0:00:09.447) 0:00:36.244 ********** 2026-04-05 01:41:30.698571 | orchestrator | ok: [testbed-manager] => { 2026-04-05 01:41:30.698578 | orchestrator |  "changed": false, 2026-04-05 01:41:30.698585 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:41:30.698592 | orchestrator | } 2026-04-05 01:41:30.698599 | orchestrator | 2026-04-05 01:41:30.698606 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-04-05 01:41:30.698612 | orchestrator | Sunday 05 April 2026 01:40:46 +0000 (0:00:00.168) 0:00:36.413 ********** 2026-04-05 01:41:30.698619 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698625 | orchestrator | 2026-04-05 01:41:30.698632 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************ 2026-04-05 01:41:30.698639 | orchestrator | Sunday 05 April 2026 01:40:50 +0000 (0:00:03.857) 0:00:40.270 ********** 2026-04-05 01:41:30.698646 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698652 | orchestrator | 2026-04-05 01:41:30.698659 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-04-05 01:41:30.698665 | orchestrator | Sunday 05 April 2026 01:40:52 +0000 (0:00:01.940) 0:00:42.210 ********** 2026-04-05 01:41:30.698672 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698679 | orchestrator | 2026-04-05 01:41:30.698685 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-04-05 01:41:30.698692 | orchestrator | Sunday 05 April 2026 01:40:56 +0000 (0:00:03.846) 
0:00:46.057 ********** 2026-04-05 01:41:30.698698 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698705 | orchestrator | 2026-04-05 01:41:30.698711 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-04-05 01:41:30.698718 | orchestrator | Sunday 05 April 2026 01:40:56 +0000 (0:00:00.201) 0:00:46.258 ********** 2026-04-05 01:41:30.698724 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.698731 | orchestrator | 2026-04-05 01:41:30.698738 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-04-05 01:41:30.698745 | orchestrator | Sunday 05 April 2026 01:40:58 +0000 (0:00:02.467) 0:00:48.725 ********** 2026-04-05 01:41:30.698751 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.698758 | orchestrator | 2026-04-05 01:41:30.698765 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-04-05 01:41:30.698771 | orchestrator | Sunday 05 April 2026 01:41:09 +0000 (0:00:10.910) 0:00:59.635 ********** 2026-04-05 01:41:30.698778 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.698784 | orchestrator | 2026-04-05 01:41:30.698791 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-04-05 01:41:30.698798 | orchestrator | Sunday 05 April 2026 01:41:10 +0000 (0:00:00.765) 0:01:00.401 ********** 2026-04-05 01:41:30.698804 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698811 | orchestrator | 2026-04-05 01:41:30.698818 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-04-05 01:41:30.698824 | orchestrator | Sunday 05 April 2026 01:41:12 +0000 (0:00:01.580) 0:01:01.981 ********** 2026-04-05 01:41:30.698831 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698837 | orchestrator | 2026-04-05 01:41:30.698844 | orchestrator | TASK 
[osism.validations.tempest : Set fact for config option api_extensions] *** 2026-04-05 01:41:30.698851 | orchestrator | Sunday 05 April 2026 01:41:13 +0000 (0:00:01.602) 0:01:03.584 ********** 2026-04-05 01:41:30.698858 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698864 | orchestrator | 2026-04-05 01:41:30.698871 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-04-05 01:41:30.698886 | orchestrator | Sunday 05 April 2026 01:41:14 +0000 (0:00:00.199) 0:01:03.784 ********** 2026-04-05 01:41:30.698898 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698908 | orchestrator | 2026-04-05 01:41:30.698928 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-04-05 01:41:30.698939 | orchestrator | Sunday 05 April 2026 01:41:14 +0000 (0:00:00.400) 0:01:04.184 ********** 2026-04-05 01:41:30.698950 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 01:41:30.698957 | orchestrator | 2026-04-05 01:41:30.698964 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] *** 2026-04-05 01:41:30.698989 | orchestrator | Sunday 05 April 2026 01:41:18 +0000 (0:00:03.980) 0:01:08.164 ********** 2026-04-05 01:41:30.698996 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-04-05 01:41:30.699003 | orchestrator |  "changed": false, 2026-04-05 01:41:30.699010 | orchestrator |  "msg": "All assertions passed" 2026-04-05 01:41:30.699016 | orchestrator | } 2026-04-05 01:41:30.699023 | orchestrator | 2026-04-05 01:41:30.699030 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-04-05 01:41:30.699037 | orchestrator | Sunday 05 April 2026 01:41:18 +0000 (0:00:00.194) 0:01:08.358 ********** 2026-04-05 01:41:30.699044 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-04-05 
01:41:30.699052 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-04-05 01:41:30.699059 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:41:30.699066 | orchestrator | 2026-04-05 01:41:30.699072 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-04-05 01:41:30.699079 | orchestrator | Sunday 05 April 2026 01:41:18 +0000 (0:00:00.202) 0:01:08.561 ********** 2026-04-05 01:41:30.699086 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:41:30.699092 | orchestrator | 2026-04-05 01:41:30.699099 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-04-05 01:41:30.699106 | orchestrator | Sunday 05 April 2026 01:41:19 +0000 (0:00:00.179) 0:01:08.740 ********** 2026-04-05 01:41:30.699112 | orchestrator | ok: [testbed-manager] 2026-04-05 01:41:30.699119 | orchestrator | 2026-04-05 01:41:30.699126 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-04-05 01:41:30.699132 | orchestrator | Sunday 05 April 2026 01:41:19 +0000 (0:00:00.512) 0:01:09.253 ********** 2026-04-05 01:41:30.699163 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.699171 | orchestrator | 2026-04-05 01:41:30.699178 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-04-05 01:41:30.699184 | orchestrator | Sunday 05 April 2026 01:41:20 +0000 (0:00:00.977) 0:01:10.231 ********** 2026-04-05 01:41:30.699191 | orchestrator | ok: [testbed-manager] 2026-04-05 01:41:30.699198 | orchestrator | 2026-04-05 01:41:30.699205 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-04-05 01:41:30.699211 | orchestrator | Sunday 05 April 2026 01:41:20 +0000 (0:00:00.477) 0:01:10.709 ********** 2026-04-05 01:41:30.699218 | orchestrator | skipping: [testbed-manager] 2026-04-05 
01:41:30.699225 | orchestrator | 2026-04-05 01:41:30.699231 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-04-05 01:41:30.699238 | orchestrator | Sunday 05 April 2026 01:41:21 +0000 (0:00:00.343) 0:01:11.052 ********** 2026-04-05 01:41:30.699245 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-04-05 01:41:30.699252 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-04-05 01:41:30.699258 | orchestrator | 2026-04-05 01:41:30.699265 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-04-05 01:41:30.699272 | orchestrator | Sunday 05 April 2026 01:41:29 +0000 (0:00:08.317) 0:01:19.369 ********** 2026-04-05 01:41:30.699278 | orchestrator | changed: [testbed-manager] 2026-04-05 01:41:30.699291 | orchestrator | 2026-04-05 01:41:30.699298 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:41:30.699306 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 01:41:30.699313 | orchestrator | 2026-04-05 01:41:30.699320 | orchestrator | 2026-04-05 01:41:30.699326 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 01:41:30.699403 | orchestrator | Sunday 05 April 2026 01:41:30 +0000 (0:00:01.039) 0:01:20.408 ********** 2026-04-05 01:41:30.699411 | orchestrator | =============================================================================== 2026-04-05 01:41:30.699418 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 23.78s 2026-04-05 01:41:30.699425 | orchestrator | osism.validations.tempest : Install qemu-utils package ----------------- 10.91s 2026-04-05 01:41:30.699432 | orchestrator | 
osism.validations.tempest : Resolve image IDs --------------------------- 9.45s 2026-04-05 01:41:30.699439 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 8.32s 2026-04-05 01:41:30.699452 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.98s 2026-04-05 01:41:30.699459 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.86s 2026-04-05 01:41:30.699466 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.85s 2026-04-05 01:41:30.699473 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.47s 2026-04-05 01:41:30.699480 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.94s 2026-04-05 01:41:30.699487 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.60s 2026-04-05 01:41:30.699494 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.58s 2026-04-05 01:41:30.699501 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.28s 2026-04-05 01:41:30.699508 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.04s 2026-04-05 01:41:30.699515 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.98s 2026-04-05 01:41:30.699522 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.93s 2026-04-05 01:41:30.699529 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.77s 2026-04-05 01:41:30.699536 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.51s 2026-04-05 01:41:30.699549 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.49s 2026-04-05 01:41:30.975211 | orchestrator | 
osism.validations.tempest : Get stats of include list ------------------- 0.48s 2026-04-05 01:41:30.975327 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.40s 2026-04-05 01:41:31.185109 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf 2026-04-05 01:41:31.190406 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf 2026-04-05 01:41:31.193414 | orchestrator | 2026-04-05 01:41:31.193462 | orchestrator | ## IDENTITY (API) 2026-04-05 01:41:31.193472 | orchestrator | 2026-04-05 01:41:31.193480 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-05 01:41:31.193488 | orchestrator | + echo 2026-04-05 01:41:31.193497 | orchestrator | + echo '## IDENTITY (API)' 2026-04-05 01:41:31.193505 | orchestrator | + echo 2026-04-05 01:41:31.193513 | orchestrator | + _tempest tempest.api.identity.v3 2026-04-05 01:41:31.193522 | orchestrator | + local regex=tempest.api.identity.v3 2026-04-05 01:41:31.194445 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-04-05 01:41:31.195927 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-05 01:41:31.198688 | orchestrator | + tee -a /opt/tempest/20260405-0141.log 2026-04-05 01:41:34.966005 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-05 01:41:34.966237 | orchestrator | Did you mean one of these? 
2026-04-05 01:41:34.966262 | orchestrator | help 2026-04-05 01:41:34.966333 | orchestrator | init 2026-04-05 01:41:35.387079 | orchestrator | 2026-04-05 01:41:35.387205 | orchestrator | ## IMAGE (API) 2026-04-05 01:41:35.387221 | orchestrator | 2026-04-05 01:41:35.387233 | orchestrator | + echo 2026-04-05 01:41:35.387244 | orchestrator | + echo '## IMAGE (API)' 2026-04-05 01:41:35.387256 | orchestrator | + echo 2026-04-05 01:41:35.387268 | orchestrator | + _tempest tempest.api.image.v2 2026-04-05 01:41:35.387279 | orchestrator | + local regex=tempest.api.image.v2 2026-04-05 01:41:35.387945 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16 2026-04-05 01:41:35.389353 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-05 01:41:35.392829 | orchestrator | + tee -a /opt/tempest/20260405-0141.log 2026-04-05 01:41:39.086171 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-05 01:41:39.086269 | orchestrator | Did you mean one of these? 
2026-04-05 01:41:39.086283 | orchestrator | help 2026-04-05 01:41:39.086294 | orchestrator | init 2026-04-05 01:41:39.485826 | orchestrator | 2026-04-05 01:41:39.485931 | orchestrator | ## NETWORK (API) 2026-04-05 01:41:39.485945 | orchestrator | 2026-04-05 01:41:39.485957 | orchestrator | + echo 2026-04-05 01:41:39.485968 | orchestrator | + echo '## NETWORK (API)' 2026-04-05 01:41:39.485980 | orchestrator | + echo 2026-04-05 01:41:39.485990 | orchestrator | + _tempest tempest.api.network 2026-04-05 01:41:39.486001 | orchestrator | + local regex=tempest.api.network 2026-04-05 01:41:39.486231 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16 2026-04-05 01:41:39.486858 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-05 01:41:39.489579 | orchestrator | + tee -a /opt/tempest/20260405-0141.log 2026-04-05 01:41:43.224565 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-05 01:41:43.224691 | orchestrator | Did you mean one of these? 
2026-04-05 01:41:43.224716 | orchestrator | help 2026-04-05 01:41:43.224737 | orchestrator | init 2026-04-05 01:41:43.628373 | orchestrator | 2026-04-05 01:41:43.628455 | orchestrator | ## VOLUME (API) 2026-04-05 01:41:43.628464 | orchestrator | 2026-04-05 01:41:43.628469 | orchestrator | + echo 2026-04-05 01:41:43.628473 | orchestrator | + echo '## VOLUME (API)' 2026-04-05 01:41:43.628478 | orchestrator | + echo 2026-04-05 01:41:43.628482 | orchestrator | + _tempest tempest.api.volume 2026-04-05 01:41:43.628486 | orchestrator | + local regex=tempest.api.volume 2026-04-05 01:41:43.629488 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16 2026-04-05 01:41:43.631374 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-05 01:41:43.636112 | orchestrator | + tee -a /opt/tempest/20260405-0141.log 2026-04-05 01:41:47.386794 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-05 01:41:47.386894 | orchestrator | Did you mean one of these? 
2026-04-05 01:41:47.386908 | orchestrator | help 2026-04-05 01:41:47.386918 | orchestrator | init 2026-04-05 01:41:47.819647 | orchestrator | 2026-04-05 01:41:47.819746 | orchestrator | ## COMPUTE (API) 2026-04-05 01:41:47.819766 | orchestrator | 2026-04-05 01:41:47.819790 | orchestrator | + echo 2026-04-05 01:41:47.819802 | orchestrator | + echo '## COMPUTE (API)' 2026-04-05 01:41:47.819814 | orchestrator | + echo 2026-04-05 01:41:47.819825 | orchestrator | + _tempest tempest.api.compute 2026-04-05 01:41:47.819866 | orchestrator | + local regex=tempest.api.compute 2026-04-05 01:41:47.819881 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-04-05 01:41:47.820961 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-05 01:41:47.823989 | orchestrator | + tee -a /opt/tempest/20260405-0141.log 2026-04-05 01:41:51.571736 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-05 01:41:51.571811 | orchestrator | Did you mean one of these? 
2026-04-05 01:41:51.571818 | orchestrator | help 2026-04-05 01:41:51.571822 | orchestrator | init 2026-04-05 01:41:52.127738 | orchestrator | 2026-04-05 01:41:52.127859 | orchestrator | ## DNS (API) 2026-04-05 01:41:52.127877 | orchestrator | 2026-04-05 01:41:52.128809 | orchestrator | + echo 2026-04-05 01:41:52.128848 | orchestrator | + echo '## DNS (API)' 2026-04-05 01:41:52.128867 | orchestrator | + echo 2026-04-05 01:41:52.128886 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-04-05 01:41:52.128905 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-04-05 01:41:52.129183 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-04-05 01:41:52.130078 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-05 01:41:52.133839 | orchestrator | + tee -a /opt/tempest/20260405-0141.log 2026-04-05 01:41:55.927201 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-05 01:41:55.927330 | orchestrator | Did you mean one of these? 
2026-04-05 01:41:55.927356 | orchestrator | help 2026-04-05 01:41:55.927376 | orchestrator | init 2026-04-05 01:41:56.377341 | orchestrator | 2026-04-05 01:41:56.377499 | orchestrator | ## OBJECT-STORE (API) 2026-04-05 01:41:56.377519 | orchestrator | 2026-04-05 01:41:56.377531 | orchestrator | + echo 2026-04-05 01:41:56.377543 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-04-05 01:41:56.377554 | orchestrator | + echo 2026-04-05 01:41:56.377565 | orchestrator | + _tempest tempest.api.object_storage 2026-04-05 01:41:56.377577 | orchestrator | + local regex=tempest.api.object_storage 2026-04-05 01:41:56.377607 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-04-05 01:41:56.377781 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-05 01:41:56.380239 | orchestrator | + tee -a /opt/tempest/20260405-0141.log 2026-04-05 01:42:00.161678 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-05 01:42:00.161787 | orchestrator | Did you mean one of these? 
2026-04-05 01:42:00.161803 | orchestrator | help 2026-04-05 01:42:00.161815 | orchestrator | init 2026-04-05 01:42:01.015820 | orchestrator | ok: Runtime: 0:02:05.343538 2026-04-05 01:42:01.038691 | 2026-04-05 01:42:01.038875 | TASK [Check prometheus alert status] 2026-04-05 01:42:01.575639 | orchestrator | skipping: Conditional result was False 2026-04-05 01:42:01.579272 | 2026-04-05 01:42:01.579440 | PLAY RECAP 2026-04-05 01:42:01.579579 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0 2026-04-05 01:42:01.579692 | 2026-04-05 01:42:01.823149 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-04-05 01:42:01.825630 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-05 01:42:02.600080 | 2026-04-05 01:42:02.600293 | PLAY [Post output play] 2026-04-05 01:42:02.617541 | 2026-04-05 01:42:02.617694 | LOOP [stage-output : Register sources] 2026-04-05 01:42:02.689379 | 2026-04-05 01:42:02.689719 | TASK [stage-output : Check sudo] 2026-04-05 01:42:03.571976 | orchestrator | sudo: a password is required 2026-04-05 01:42:03.727825 | orchestrator | ok: Runtime: 0:00:00.009531 2026-04-05 01:42:03.735466 | 2026-04-05 01:42:03.735600 | LOOP [stage-output : Set source and destination for files and folders] 2026-04-05 01:42:03.769118 | 2026-04-05 01:42:03.769501 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-04-05 01:42:03.839598 | orchestrator | ok 2026-04-05 01:42:03.848569 | 2026-04-05 01:42:03.848721 | LOOP [stage-output : Ensure target folders exist] 2026-04-05 01:42:04.311509 | orchestrator | ok: "docs" 2026-04-05 01:42:04.311824 | 2026-04-05 01:42:04.581486 | orchestrator | ok: "artifacts" 2026-04-05 01:42:04.840877 | orchestrator | ok: "logs" 2026-04-05 01:42:04.858969 | 2026-04-05 01:42:04.859216 | LOOP [stage-output : Copy files and folders to staging folder] 2026-04-05 01:42:04.909255 | 2026-04-05 01:42:04.909547 | TASK 
[stage-output : Make all log files readable] 2026-04-05 01:42:05.202123 | orchestrator | ok 2026-04-05 01:42:05.210880 | 2026-04-05 01:42:05.211036 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-04-05 01:42:05.245790 | orchestrator | skipping: Conditional result was False 2026-04-05 01:42:05.259974 | 2026-04-05 01:42:05.260141 | TASK [stage-output : Discover log files for compression] 2026-04-05 01:42:05.284493 | orchestrator | skipping: Conditional result was False 2026-04-05 01:42:05.294442 | 2026-04-05 01:42:05.294583 | LOOP [stage-output : Archive everything from logs] 2026-04-05 01:42:05.335602 | 2026-04-05 01:42:05.335786 | PLAY [Post cleanup play] 2026-04-05 01:42:05.344442 | 2026-04-05 01:42:05.344548 | TASK [Set cloud fact (Zuul deployment)] 2026-04-05 01:42:05.412795 | orchestrator | ok 2026-04-05 01:42:05.424715 | 2026-04-05 01:42:05.424842 | TASK [Set cloud fact (local deployment)] 2026-04-05 01:42:05.459215 | orchestrator | skipping: Conditional result was False 2026-04-05 01:42:05.475911 | 2026-04-05 01:42:05.476162 | TASK [Clean the cloud environment] 2026-04-05 01:42:06.086288 | orchestrator | 2026-04-05 01:42:06 - clean up servers 2026-04-05 01:42:06.824555 | orchestrator | 2026-04-05 01:42:06 - testbed-manager 2026-04-05 01:42:06.916675 | orchestrator | 2026-04-05 01:42:06 - testbed-node-4 2026-04-05 01:42:06.998008 | orchestrator | 2026-04-05 01:42:06 - testbed-node-0 2026-04-05 01:42:07.081333 | orchestrator | 2026-04-05 01:42:07 - testbed-node-1 2026-04-05 01:42:07.169657 | orchestrator | 2026-04-05 01:42:07 - testbed-node-2 2026-04-05 01:42:07.256093 | orchestrator | 2026-04-05 01:42:07 - testbed-node-5 2026-04-05 01:42:07.361062 | orchestrator | 2026-04-05 01:42:07 - testbed-node-3 2026-04-05 01:42:07.449706 | orchestrator | 2026-04-05 01:42:07 - clean up keypairs 2026-04-05 01:42:07.471762 | orchestrator | 2026-04-05 01:42:07 - testbed 2026-04-05 01:42:07.500060 | orchestrator | 2026-04-05 01:42:07 - wait for 
servers to be gone
2026-04-05 01:42:20.631200 | orchestrator | 2026-04-05 01:42:20 - clean up ports
2026-04-05 01:42:20.816026 | orchestrator | 2026-04-05 01:42:20 - 45b161c7-d4c7-4e10-ac2d-f1372a2e6b89
2026-04-05 01:42:21.098649 | orchestrator | 2026-04-05 01:42:21 - 6a98f27f-1c06-4bd9-a845-3b6170e577cb
2026-04-05 01:42:21.377971 | orchestrator | 2026-04-05 01:42:21 - 8ab79469-bfbd-4190-b0cb-8ddb2c6a71b1
2026-04-05 01:42:21.622715 | orchestrator | 2026-04-05 01:42:21 - b73d39c7-effe-4fe0-a755-aa2a39cc6f98
2026-04-05 01:42:21.852551 | orchestrator | 2026-04-05 01:42:21 - b8a0b284-9bdd-40cb-bd06-90e47090365c
2026-04-05 01:42:22.253975 | orchestrator | 2026-04-05 01:42:22 - ee0bc8b3-5246-4b3e-97b9-1f196e3281f4
2026-04-05 01:42:22.459351 | orchestrator | 2026-04-05 01:42:22 - f04b57f3-5912-47fc-a04f-bd4082ea3ed8
2026-04-05 01:42:22.668592 | orchestrator | 2026-04-05 01:42:22 - clean up volumes
2026-04-05 01:42:22.798822 | orchestrator | 2026-04-05 01:42:22 - testbed-volume-1-node-base
2026-04-05 01:42:22.837263 | orchestrator | 2026-04-05 01:42:22 - testbed-volume-5-node-base
2026-04-05 01:42:22.878246 | orchestrator | 2026-04-05 01:42:22 - testbed-volume-2-node-base
2026-04-05 01:42:22.921656 | orchestrator | 2026-04-05 01:42:22 - testbed-volume-0-node-base
2026-04-05 01:42:22.965512 | orchestrator | 2026-04-05 01:42:22 - testbed-volume-3-node-base
2026-04-05 01:42:23.015027 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-4-node-base
2026-04-05 01:42:23.061439 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-manager-base
2026-04-05 01:42:23.102309 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-6-node-3
2026-04-05 01:42:23.147767 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-8-node-5
2026-04-05 01:42:23.187566 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-7-node-4
2026-04-05 01:42:23.230595 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-1-node-4
2026-04-05 01:42:23.272923 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-3-node-3
2026-04-05 01:42:23.314900 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-0-node-3
2026-04-05 01:42:23.356135 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-5-node-5
2026-04-05 01:42:23.402795 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-4-node-4
2026-04-05 01:42:23.442760 | orchestrator | 2026-04-05 01:42:23 - testbed-volume-2-node-5
2026-04-05 01:42:23.488154 | orchestrator | 2026-04-05 01:42:23 - disconnect routers
2026-04-05 01:42:23.601314 | orchestrator | 2026-04-05 01:42:23 - testbed
2026-04-05 01:42:24.577702 | orchestrator | 2026-04-05 01:42:24 - clean up subnets
2026-04-05 01:42:24.630644 | orchestrator | 2026-04-05 01:42:24 - subnet-testbed-management
2026-04-05 01:42:24.809522 | orchestrator | 2026-04-05 01:42:24 - clean up networks
2026-04-05 01:42:24.970722 | orchestrator | 2026-04-05 01:42:24 - net-testbed-management
2026-04-05 01:42:25.257420 | orchestrator | 2026-04-05 01:42:25 - clean up security groups
2026-04-05 01:42:25.301602 | orchestrator | 2026-04-05 01:42:25 - testbed-management
2026-04-05 01:42:25.411958 | orchestrator | 2026-04-05 01:42:25 - testbed-node
2026-04-05 01:42:25.519621 | orchestrator | 2026-04-05 01:42:25 - clean up floating ips
2026-04-05 01:42:25.557017 | orchestrator | 2026-04-05 01:42:25 - 81.163.192.221
2026-04-05 01:42:25.904769 | orchestrator | 2026-04-05 01:42:25 - clean up routers
2026-04-05 01:42:26.018804 | orchestrator | 2026-04-05 01:42:26 - testbed
2026-04-05 01:42:27.038207 | orchestrator | ok: Runtime: 0:00:21.055533
2026-04-05 01:42:27.043009 |
2026-04-05 01:42:27.043197 | PLAY RECAP
2026-04-05 01:42:27.043336 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-05 01:42:27.043399 |
2026-04-05 01:42:27.179403 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-05 01:42:27.180565 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 01:42:27.917452 |
2026-04-05 01:42:27.917614 | PLAY [Cleanup play]
2026-04-05 01:42:27.933420 |
2026-04-05 01:42:27.933554 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 01:42:28.003646 | orchestrator | ok
2026-04-05 01:42:28.013371 |
2026-04-05 01:42:28.013529 | TASK [Set cloud fact (local deployment)]
2026-04-05 01:42:28.048601 | orchestrator | skipping: Conditional result was False
2026-04-05 01:42:28.064310 |
2026-04-05 01:42:28.064473 | TASK [Clean the cloud environment]
2026-04-05 01:42:29.238681 | orchestrator | 2026-04-05 01:42:29 - clean up servers
2026-04-05 01:42:29.721841 | orchestrator | 2026-04-05 01:42:29 - clean up keypairs
2026-04-05 01:42:29.741683 | orchestrator | 2026-04-05 01:42:29 - wait for servers to be gone
2026-04-05 01:42:29.787066 | orchestrator | 2026-04-05 01:42:29 - clean up ports
2026-04-05 01:42:29.866174 | orchestrator | 2026-04-05 01:42:29 - clean up volumes
2026-04-05 01:42:29.932649 | orchestrator | 2026-04-05 01:42:29 - disconnect routers
2026-04-05 01:42:29.966290 | orchestrator | 2026-04-05 01:42:29 - clean up subnets
2026-04-05 01:42:29.986781 | orchestrator | 2026-04-05 01:42:29 - clean up networks
2026-04-05 01:42:30.164099 | orchestrator | 2026-04-05 01:42:30 - clean up security groups
2026-04-05 01:42:30.200661 | orchestrator | 2026-04-05 01:42:30 - clean up floating ips
2026-04-05 01:42:30.226872 | orchestrator | 2026-04-05 01:42:30 - clean up routers
2026-04-05 01:42:30.610325 | orchestrator | ok: Runtime: 0:00:01.395959
2026-04-05 01:42:30.614569 |
2026-04-05 01:42:30.614712 | PLAY RECAP
2026-04-05 01:42:30.614811 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-05 01:42:30.614923 |
2026-04-05 01:42:30.749616 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 01:42:30.752329 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 01:42:31.526382 |
2026-04-05 01:42:31.526563 | PLAY [Base post-fetch]
2026-04-05 01:42:31.556935 |
2026-04-05 01:42:31.557270 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-05 01:42:31.613416 | orchestrator | skipping: Conditional result was False
2026-04-05 01:42:31.620446 |
2026-04-05 01:42:31.620605 | TASK [fetch-output : Set log path for single node]
2026-04-05 01:42:31.670748 | orchestrator | ok
2026-04-05 01:42:31.676938 |
2026-04-05 01:42:31.677058 | LOOP [fetch-output : Ensure local output dirs]
2026-04-05 01:42:32.171829 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/work/logs"
2026-04-05 01:42:32.445057 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/work/artifacts"
2026-04-05 01:42:32.724226 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4d3ed407a6834c7fa69ad074083dc131/work/docs"
2026-04-05 01:42:32.748663 |
2026-04-05 01:42:32.748812 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-05 01:42:33.713850 | orchestrator | changed: .d..t...... ./
2026-04-05 01:42:33.714239 | orchestrator | changed: All items complete
2026-04-05 01:42:33.714303 |
2026-04-05 01:42:34.433898 | orchestrator | changed: .d..t...... ./
2026-04-05 01:42:35.187207 | orchestrator | changed: .d..t...... ./
2026-04-05 01:42:35.221788 |
2026-04-05 01:42:35.221959 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-05 01:42:35.263728 | orchestrator | skipping: Conditional result was False
2026-04-05 01:42:35.266627 | orchestrator | skipping: Conditional result was False
2026-04-05 01:42:35.279396 |
2026-04-05 01:42:35.279496 | PLAY RECAP
2026-04-05 01:42:35.279566 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-05 01:42:35.279601 |
2026-04-05 01:42:35.423337 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 01:42:35.424408 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 01:42:36.199959 |
2026-04-05 01:42:36.200239 | PLAY [Base post]
2026-04-05 01:42:36.215301 |
2026-04-05 01:42:36.215441 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-05 01:42:37.226562 | orchestrator | changed
2026-04-05 01:42:37.235512 |
2026-04-05 01:42:37.235631 | PLAY RECAP
2026-04-05 01:42:37.235707 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-05 01:42:37.235781 |
2026-04-05 01:42:37.357338 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 01:42:37.359735 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-05 01:42:38.154744 |
2026-04-05 01:42:38.154955 | PLAY [Base post-logs]
2026-04-05 01:42:38.165849 |
2026-04-05 01:42:38.165992 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-05 01:42:38.621735 | localhost | changed
2026-04-05 01:42:38.637354 |
2026-04-05 01:42:38.637542 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-05 01:42:38.675949 | localhost | ok
2026-04-05 01:42:38.682551 |
2026-04-05 01:42:38.682755 | TASK [Set zuul-log-path fact]
2026-04-05 01:42:38.702393 | localhost | ok
2026-04-05 01:42:38.717290 |
2026-04-05 01:42:38.717451 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 01:42:38.755784 | localhost | ok
2026-04-05 01:42:38.762086 |
2026-04-05 01:42:38.762252 | TASK [upload-logs : Create log directories]
2026-04-05 01:42:39.286135 | localhost | changed
2026-04-05 01:42:39.291552 |
2026-04-05 01:42:39.291727 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-05 01:42:39.807313 | localhost -> localhost | ok: Runtime: 0:00:00.007037
2026-04-05 01:42:39.814712 |
2026-04-05 01:42:39.814901 | TASK [upload-logs : Upload logs to log server]
2026-04-05 01:42:40.393786 | localhost | Output suppressed because no_log was given
2026-04-05 01:42:40.397745 |
2026-04-05 01:42:40.397935 | LOOP [upload-logs : Compress console log and json output]
2026-04-05 01:42:40.451196 | localhost | skipping: Conditional result was False
2026-04-05 01:42:40.455959 | localhost | skipping: Conditional result was False
2026-04-05 01:42:40.470411 |
2026-04-05 01:42:40.470681 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-05 01:42:40.541219 | localhost | skipping: Conditional result was False
2026-04-05 01:42:40.541513 |
2026-04-05 01:42:40.556084 | localhost | skipping: Conditional result was False
2026-04-05 01:42:40.561225 |
2026-04-05 01:42:40.561372 | LOOP [upload-logs : Upload console log and json output]
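The "Clean the cloud environment" task above tears down resources in a fixed dependency order: servers first, then keypairs, a wait for the servers to be gone, then ports, volumes, router disconnects, subnets, networks, security groups, floating IPs, and finally the routers themselves. The sketch below illustrates that ordering only; `CLEANUP_PHASES` and `run_cleanup` are hypothetical names for illustration, not the actual osism/testbed cleanup implementation or its OpenStack API calls.

```python
# Minimal sketch of the teardown order seen in the log above.
# Dependent resources go first; the router goes last, after everything
# attached to it (ports, subnet interfaces, floating IPs) is removed.

CLEANUP_PHASES = [
    "servers",
    "keypairs",
    "wait for servers to be gone",  # ports/volumes stay attached until then
    "ports",
    "volumes",
    "disconnect routers",  # detach subnet interfaces before subnet deletion
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",  # last: nothing references the router anymore
]


def run_cleanup(delete_fn):
    """Apply delete_fn to each phase in dependency order; return the order."""
    done = []
    for phase in CLEANUP_PHASES:
        delete_fn(phase)
        done.append(phase)
    return done
```

The second cleanup pass in the log (the `cleanup.yml` post-run playbook) walks the same phases and finishes in about a second, since the earlier pass already deleted everything.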